Met Office Climate Data Challenge 2022

Daniel Ayers – d.ayers@pgr.reading.ac.uk  

The Met Office Climate Data Challenge 2022 was a two-day virtual hackathon-style event where participants hacked solutions to challenges set by Aon (Wikipedia: “a British-American multinational professional services firm that sells a range of financial risk-mitigation products, including insurance, pension administration, and health-insurance plans”) and the Ministry of Justice (MoJ). Participants hailed from the Met Office, UCL and the universities of Reading, Bristol, Oxford, Exeter and Leeds. Here’s how I found the experience and what I got out of it. 

If your PhD experience is anything like mine, you feel pretty busy. In particular, there are multitudinous ways one can engage in not-directly-your-research activities, such as being part of the panto or other social groups, going to seminars, organising seminars, going to conferences, etc. Obviously these can all make a positive contribution to your experience – and seminars are often very useful – but my point is: it can sometimes feel like there are too few periods of uninterrupted time to focus deeply on actually doing your research. 

Fig. 1: There are many ways to be distracted from actually doing your research. 

So: was it worth investing two precious days into a hackathon? Definitely. The tl;dr is: I got to work with interesting people, I got an experience of working on a commercial style project (very short deadline for the entire process from raw data to delivered product), and I got an insight into the reinsurance industry. I’ll expand on these points in a bit. 

Before the main event, the four available challenges were sent out a few weeks in advance. There was a 2hr pre-event meeting the week beforehand. In this pre-meeting, the challenges were formally introduced by representatives from Aon and MoJ, and all the participants split into groups to a) discuss ideas for challenge solutions and b) form teams for the main event. It really would have helped to have done a little bit of individual brainstorming and useful-material reading before this meeting.  

As it happened, I didn’t prepare any further than reading through the challenges, but this was useful. I had time to think about what I thought I could bring to each challenge, and vaguely what might be involved in solutions to each challenge. I concluded that the most appropriate challenge for me was an Aon challenge about determining how much climate change was likely to impact insurance companies through changes to the things insurance companies insure (as opposed to, for example, the frequency or intensity of extreme weather events which might cause payouts to be required). In the pre-meeting, someone else presented an idea that lined up with what I wanted to do: model some change in earth and human systems and use this to create new exposure data sets (for exposure data set, read “list of things the insurance companies insure for, and how much a full payout will cost”). This was a lofty ambition, as I will explain. Regardless, I signed up to this team and I was all set for the main two-day event. 

Here are some examples of plots that helped us to understand the exposure data set. We were told, for example, that for some countries, a token lat-lon coordinate was used for all entries in that country. This resulted in some lat-lon coords being used with comparatively high frequency, despite the entries potentially describing large or distinct areas of land.  

The next two plots show the breakdown of the entries by country, and then by construction type. Each entry is for a particular set of buildings. When modelling the likely payout following an event (e.g. a large storm) it is useful to know how the buildings are made. 

One thing I want to mention, in case the reader is involved with creating a hackathon at any point, is the importance of challenge preparation. The key thing is that participants need to be able to hit the ground running in the event itself. Two things are key to this being possible.  

First, the challenge material should ideally provide a really good description of the problem space. In our case, we spent half of the first day in a meeting with the (very helpful) people from Aon, picking their brains about how the reinsurance industry worked, what they really cared about, what would count as an answer to this question, what was in the mysterious data set we had been given and how the data should be interpreted. Yes, this was a great opportunity to learn and have a discussion with someone I would ordinarily never meet, but my team could have spent more precious hackathon hours making a solution if the challenge material had done a better job of explaining what was going on.  

Second, any resources that are provided (in our case, a big exposure data set – see above), need to be ready to use. In our case, only one person in some other team had been sent the data set, it wasn’t available before the main event started, there was no metadata, and once I managed to get hold of it I had to spend 2-3 hours working out which encoding to use and how to deal with poorly-separated lines in the .csv file. So, to all you hackathon organisers out there: test the resources you provide, and check they can be used quickly and easily.  
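To give a flavour of that wrangling, here is a minimal sketch of the kind of defensive CSV loading involved (the file name, candidate encodings and pandas options are my assumptions for illustration, not the actual hackathon code):

```python
import pandas as pd

# Candidate encodings to try for an undocumented CSV export
# (the list and file name are assumptions for illustration).
CANDIDATE_ENCODINGS = ["utf-8", "latin-1", "cp1252"]

def load_exposure_csv(path):
    """Try several encodings and skip malformed rows rather than failing outright."""
    for enc in CANDIDATE_ENCODINGS:
        try:
            # on_bad_lines="skip" (pandas >= 1.3) drops rows whose field count
            # doesn't match the header, e.g. from poorly separated lines.
            return pd.read_csv(path, encoding=enc, on_bad_lines="skip")
        except UnicodeDecodeError:
            continue  # wrong encoding: try the next candidate
    raise ValueError(f"None of {CANDIDATE_ENCODINGS} could decode {path}")

# df = load_exposure_csv("exposure.csv")
# print(df.shape)
```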

By the end of the second day, we’d not really got our envisioned product working. I’d managed to get the data open at last, and done some data exploration plots, so at least we had a better idea of what we were playing with. My teammates had found some really useful data for population change, and for determining if a location in our data set was urban or rural. They had also set up a Slack group so that we could collaborate and discuss the different aspects of the problem, and a GitHub repo so we could share our progress (we coded everything in Python, mainly using Jupyter notebooks). We’d also done a fair amount of talking with the experts from Aon, and amongst ourselves as a team, to work out what was viable. This was a key experience from the event: coming up with a minimum viable product. The lesson from this experience was: be OK with cutting a lot of big corners. This is particularly useful for me as a PhD student, where it can be tempting to think I have time to go really deep into optimising and learning about everything required. My hackathon experience showed how much can be achieved even when the time frame forces most corners to be cut. 

To give an example of cutting corners, think about how many processes in the human-earth system might have an effect over the next 30 years on what things there are to insure, where they are, and how much they cost. Population increase, urbanisation and ruralisation, displacement from areas of rising water levels or increased flooding risk, construction materials being more expensive in order to be more environmentally friendly, immigration, etc. Now, how many of these could we account for in a simplistic model that we wanted to build in two days? Answer: not many! Given we spent the first day understanding the problem and the data, we only really had one day, or 09:45 – 15:30, so 5 hours and 45 minutes, to build our solution. We attempted to account for differences in population growth by country, by shared socio-economic pathway, and by a parameterised rural-urban movement. As I said, we didn’t get the code working by the deadline, and ended up presenting our vision, rather than a demonstration of our finished solution. 
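To make the corner-cutting concrete, here is a toy sketch of the kind of exposure projection we were aiming for: scale each entry’s total insured value by a country-level population growth factor and nudge it by a crude urban/rural parameter. The column names, growth factors and parameter values are all invented for illustration; this is not the team’s actual code.

```python
import pandas as pd

# Hypothetical country-level population growth factors to 2050 under one
# shared socio-economic pathway (values invented for illustration).
POP_GROWTH_2050 = {"GBR": 1.05, "USA": 1.12, "NGA": 1.65}

# Crude parameter representing net rural-to-urban movement (assumption).
URBAN_SHIFT = 0.10

def project_exposure(df):
    """Scale total insured value (tiv) by population growth, then nudge
    urban entries up and rural entries down to mimic urbanisation."""
    out = df.copy()
    growth = out["country_code"].map(POP_GROWTH_2050).fillna(1.0)
    out["tiv_2050"] = out["tiv"] * growth
    urban = out["is_urban"].astype(bool)
    out.loc[urban, "tiv_2050"] *= (1 + URBAN_SHIFT)
    out.loc[~urban, "tiv_2050"] *= (1 - URBAN_SHIFT)
    return out

# Example with a made-up exposure table:
example = pd.DataFrame({"country_code": ["GBR", "NGA"],
                        "tiv": [1_000_000, 500_000],
                        "is_urban": [True, False]})
print(project_exposure(example))
```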

There might be an opportunity to do more work on this project. A few of the projects from previous years’ hackathons have resulted in publications, and we are meeting shortly to see whether there is the appetite to do the same with what we’ve done. It would certainly be nice to create a more polished piece of work. That said, preserving space for my own research is also important! 

As a final word on the hackathon: it was great fun, and I really enjoyed working with my team.  PhD work can be a little isolated at times, so the opportunity to work with others was enjoyable and motivating. Hopefully, next time it will be in person. I would recommend others to get involved in future Met Office Climate Data Challenges! 

NCEO Forum: Machine Learning and AI 

Laura Risley l.risley@pgr.reading.ac.uk and Ieuan Higgs i.higgs@pgr.reading.ac.uk

Wednesday 16th – Thursday 17th March 2022 

The National Centre for Earth Observation (NCEO) is a distributed NERC centre of over 100  scientists from UK universities and research organisations (https://www.nceo.ac.uk). Last month NCEO launched a new and exciting headquarters, the Leicester Space Park. After the launch, researchers from various institutions with affiliations to NCEO were invited to a forum at the new HQ. This was an introductory workshop in Machine Learning and Artificial Intelligence. We were both lucky enough to attend this in-person event (with the exception of a few remote speakers)!  

As first year PhD students, we should probably introduce ourselves:  

Laura – I am a Scenario student based in the Mathematics department; my project is ‘Assimilation of future ocean-current measurements from satellites’. This will involve applying data assimilation to assimilate ocean-current velocities in preparation for data from future satellites. My supervisor is also the training lead and co-director of Data Assimilation at NCEO. I was thrilled to be able to attend this forum to learn new techniques that can be used in earth observation. 

Ieuan – I am a Scenario Associate based in the Meteorology department. My project is titled ‘Complex network approach to improve marine ecosystem modelling and data assimilation’.  In my work, I hope to apply some complex-network-informed machine learning techniques to predict concentrations of the less well observable nutrients in the ocean, from the well observable quantities – such as phytoplankton! As a member and fundee of NCEO, I was excited to see a training event on offer that was highly relevant to my project. 

Machine Learning (ML) and Artificial Intelligence (AI) are often thought of as intimidating and amorphous topics. This fog of misconceptions was quickly cleared up, however, as the workshop provided a brilliant, fascinating and well-structured introduction into how these fields can be leveraged in the context of earth observation. 

Introduction to NCEO 

The forum began bright and (very) early on Wednesday morning at the Leicester Space Park. Our first day of ML training began with an introduction to NCEO by director – John Remedios, and training lead – Amos Lawless. We each had the opportunity to introduce ourselves and our research in a quick two-minute presentation. This highlighted the variety in both background and experience in an entirely positive way! As well as benefiting directly from the training itself, we enjoyed being in a room full of enthusiastic people with knowledge and niches aplenty. 

Introducing ourselves and our research  

Next, we had a talk from Sebastian Hickman, a PhD student at the University of Cambridge, who introduced his work on using ML with satellite imagery to detect tall tree mortality in the Amazonian rainforest. A second talk was given by Duncan Watson-Parris from the University of Oxford on using ML to identify ship tracks from satellite imagery. These initial talks immediately got us thinking about the different ways in which ML could be used within the realm of earth science. On the second day we also had talks from ESA’s Φ-lab on a whole host of different uses for AI in earth observation.  

To begin our ML training, members of NEODAAS (NERC Earth Observation Data Acquisition and Analysis Service), David Moffat and Katie Awty-Carroll, led us through an introduction to ML and AI, and their uses and importance in a modern scientific context. The graphic below – presented by David and Katie – makes a digestible distinction between some commonly conflated terms in the subject area: 

AI vs ML vs DL* 

The discussion on the limitations and ‘bottlenecks’ of ML was of particular interest; it highlighted the numerous considerations to be made when developing an ML solution. For example, the subset of data used to train a model should ideally be representative of the entire system, avoiding or at least acknowledging the potential biases introduced by: human preferences in selecting and filtering training data; the method of data collection; the design of the ML techniques used; and how we interpret the outputs. While this may seem obvious at first, it is certainly not trivial. There are high-profile and hotly debated examples of AI being used in the real world where biases have led to significant human-affecting consequences. (https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology) (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6875681)

We were prompted to consider these ethical questions and the efficacy of ML in the context of earth science: Which problems does ML help us solve and, perhaps more importantly, which problems are we willing to entrust it with? 

A fun exercise you can try for yourself: search for images of a given profession in your search engine of choice. See if you can identify any patterns or biases in what may have been included or even excluded from the selected results!  

We then began the practical sessions which all fell within the broad umbrella of ML. This required a slight mindset shift from traditional programming as, even from a top-down perspective, the way we approached problems was completely different: 

Figure: Traditional Programming vs Machine Learning* 

We were given Jupyter notebooks to work on three separate practicals: random forest classification, a neural network for regression, and convolutional neural networks. Each showed a different application and use case of ML, giving us more ideas on how it could potentially be implemented in our own research. Adjacent to this, we were given a workflow task to think about over the two days: how could we use ML in our own projects? At the end of the second day we each presented our ideas and were given feedback. This helped ground the talks with an ongoing focus on relating new knowledge back to our own varied fields, allowing the workshop to elegantly handle the variety and promote the actual use of the skills in our own work. 
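For a flavour of the first practical, here is a minimal, self-contained random forest classification example in scikit-learn. The synthetic data and hyperparameters are placeholders rather than the notebook’s actual content.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for, e.g., per-pixel satellite features with class labels.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A small forest; n_estimators and max_depth are arbitrary illustrative choices.
clf = RandomForestClassifier(n_estimators=100, max_depth=10, random_state=0)
clf.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("Feature importances:", clf.feature_importances_)
```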

The forum was academically challenging but it was also great fun! Surrounding the concentrated days of learning, the forum offered us plenty of chances to connect with others. We were given a tour of the Space Park, an impressive space you could say was out of this world. The evening activities, bowling and shuffleboard, had a great atmosphere too! 

By the end of the event, the interest and enthusiasm of the attendees had been effectively transformed into new understanding and conversation – which is unsurprising considering the increasing relevance ML is gaining in the field of earth science. Further to this, making connections over the pandemic has been difficult, so we felt extremely fortunate that we were able to meet in person. 

Laura – The forum was an exciting insight into a field I had no experience in. Although my immediate work is focused on the application of data assimilation to ocean measurements, which does not directly relate to ML at the moment, data assimilation has high potential to overlap with ML. The forum has furthered my understanding of fields that surround the focal point of my research. In turn, this has helped me gain a more well-rounded knowledge base, opening doors to new directions my research could take. 

Ieuan – The forum has certainly given me many new avenues to explore when approaching the intended application of ML in my work, perhaps starting simple with a neural network for multivariate regression and expanding from there. The hands-on practicals were a valuable opportunity for practice and a great chance for some informal discussion on the details of ML implementation with my peers. Moreover, the event has equipped us with the skills to effectively engage with other academics when they present ML-based work – which is something I would love to do in future events!  

We both hope there will be more NCEO workshops like this in the future, perhaps an event or meetup that focuses on the intersection of ML and data assimilation, as these topics resonate with us both. We’d like to thank the NEODAAS staff from PML for leading the training and Uzma Saeed for organising the forum. It was a fun and engaging experience that we are grateful to have taken part in and we would encourage anyone with the opportunity to learn about ML to do so! 

* Graphics were provided by the NEODAAS slides used at the NCEO forum 

MeteoXchange 

Supporting International Collaboration for Early Career Researchers 

James Fallon – j.fallon@pgr.reading.ac.uk 
 

What is it? 

Due to lockdowns and travel restrictions since 2020, networking opportunities in science have been transformed. We can expect to see a mix of virtual and hybrid elements persist into the future, offering both cost-saving and carbon-saving benefits. 

The MeteoXchange project aims to become a new platform for young atmospheric scientists from all over the world, providing networking opportunities and platforms for collaboration. The project is an initiative of the German Federal Ministry of Education and Research and the research society Deutsche Forschungsgemeinschaft. Events are conducted in English, and open to young scientists anywhere. 

ECS Conference 

This year marked the first ever MeteoXchange conference, which took place online in March 2022. The ECS (early career scientists) conference took place over two days, on gather.town. An optional pre-conference event gave the opportunity for new presenters to work on presentation skills and receive feedback at the end of the main conference. 

Figure 1: Conference Schedule, including a keynote on Machine Learning and Earth System Modelling, movie night, and presenter sessions. 

Five presenter sessions were split over two days, with young scientists sharing their research to a conference hall on the virtual platform gather.town. Topics ranged from lidar sensing and reanalysis datasets, to cloud microphysics and UV radiation health impacts. I really enjoyed talks on the attribution of ‘fire weather’ to climate change, and machine learning techniques for thunderstorm forecasting! The first evening concluded with a screening of the documentary Picture a Scientist.

During the poster session on the second day, I presented my research poster to different scientists walking by my virtual poster board. Posters were designed to mimic the large A2 printouts seen at in-person events. Two posters that really stood out were a quantification of SO2 emissions from Kilauea volcano in Hawaii, and an evaluation of air quality in Cuba’s Mariel Bay using meteorological diagnostic models combined with air dispersion modelling. 

Anticipating that it might be hard to communicate on the day, I added a lot of text to my poster. However, I needn’t have worried as the virtual platform worked flawlessly for conducting poster Q&A – the next time I present on a similar platform I will try to avoid using as much text and instead focus on a more traditional layout! 

Figure 2: During the poster session, I presented my research on Reserve-Power systems – energy-volume requirements and surplus capacity set by weather events. 

By the conference end, I got the impression that everyone had really enjoyed the event! Awards were given for the winners of the best posters and talks. The ECS conference was fantastically well organised by Carola Detring (DWD) and Philipp Joppe (JGU Mainz), and a wonderful opportunity to meet researchers from around the world. 

MeteoMeets 

Since July 2021, MeteoXchange have held monthly meetups, predominantly featuring lecturers and professors who introduce research at their institute for early career scientists in search of opportunities! 

The opportunities shared at MeteoMeets are complemented by joblists and by the MeteoMap: https://www.meteoxchange.de/meteomap. The MeteoMap lists PhD and postdoc positions across Germany, neatly displayed with different markers depending on the type of institute. This resource is currently still under construction. 

Figure 3: The MeteoMap features research opportunities in Germany, available for early career researchers from across the world. 

Travel Grants 

One of the most exciting aspects of the MeteoXchange project is the opportunity for international collaboration with travel grants! 

The travel funds offered by MeteoXchange are for two or more early career scientists in the field of atmospheric sciences. Students must propose a collaborative project, which aims to spark future work and networking between their own institutions. If the application is successful, students have the opportunity to access 2,500€ for travel funds.  

Over the last two weeks of April, I will be collaborating with KIT student Fabian Mockert on “Dunkelflauten” (periods of low renewable energy production, or “dark wind lulls”). Dunkelflauten, especially cold ones, result in high electricity load on national transmission networks, leading to high costs and potentially causing failure of a fully renewable power system (doi.org/10.1038/nclimate3338). We are collaborating to use power system modelling to better understand how this stress manifests itself. Fabian will spend two weeks visiting the University of Reading campus, meeting with students and researchers from across the department. 
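As a rough illustration of what “identifying a Dunkelflaute” can look like in code, here is a minimal sketch that flags contiguous runs of low output in an hourly renewable capacity-factor series. The threshold, minimum duration and setup are assumptions for illustration, not the definition we will use in the project.

```python
import pandas as pd

CF_THRESHOLD = 0.1  # combined wind+solar capacity factor counted as "dark and still" (assumption)
MIN_HOURS = 24      # minimum duration to call it a Dunkelflaute (assumption)

def find_dunkelflauten(cf: pd.Series) -> pd.DataFrame:
    """Return start, end and duration of contiguous runs where an hourly
    capacity-factor series stays below the threshold for long enough."""
    low = cf < CF_THRESHOLD
    run_id = (low != low.shift()).cumsum()  # label contiguous runs
    events = []
    for _, chunk in cf[low].groupby(run_id[low]):
        if len(chunk) >= MIN_HOURS:
            events.append({"start": chunk.index[0], "end": chunk.index[-1], "hours": len(chunk)})
    return pd.DataFrame(events)

# Example with a made-up hourly series (constant low output for a month):
idx = pd.date_range("2020-01-01", periods=24 * 30, freq="h")
print(find_dunkelflauten(pd.Series(0.05, index=idx)))
```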

Get Involved 

The 2022 travel grant deadline has already closed; however, it is hoped that MeteoXchange will receive funding to continue this project into future years, supporting young researchers in collaboration and idea-exchange. 

To get involved with the MeteoMeets, and stay up to date on MeteoXchange related opportunities, sign up to the mailing list.

COP Climate Action Studio 2021 and a visit to the Green Zone, Glasgow  

Helen Hooker h.hooker@pgr.reading.ac.uk 

Introduction 

SCENARIO DTP and the Walker Academy offered PhD students the opportunity to take part in the annual COP Climate Action Studio (COPCAS) 2021. COPCAS began with workshops on the background of COP, communication and interviewing skills and an understanding of the COP26 themes and the (massive!) schedule. James Fallon and Kerry Smith were ‘on the ground’ in the Blue Zone, Glasgow in week 1 of COP26, followed by Gwyn Matthews and Jo Herschan during week 2. Interviews were arranged between COP26 observers, and COPCAS participants back in Reading who were following COP26 events in small groups through livestream. Students summarised the varied and interesting findings by writing blog posts and engaging with social media.

Figure 1: COPCAS in action.   

Motivation, training and week 1 

Personally, I wanted to learn more about the COP process and to understand climate policy implementation and action (or lack thereof). I was also interested to learn more about anticipatory action and forecast-based financing, which relate to my research. After spending 18 months working remotely in my kitchen, I wanted to meet other students and improve at formulating and asking questions! I found the initial training reassuring in many ways, especially finding out that so many people have dedicated themselves to driving change and finding solutions. During the first week of COP26 we heard about so many positive efforts to combat the climate crisis, from personal actions to community schemes, and even country-wide ambitious projects such as reforestation in Costa Rica. A momentum seemed to be building, with pledges to stop deforestation and to reduce methane emissions.

Green Zone visit 

Figure 2: Green Zone visit included a weekend full of exhibitors, talks, films and panel discussions plus a giant inflatable extracting CO₂ via bouncing!

During the middle weekend of COP26, some of us visited the Green Zone in Glasgow. This was a mini version of the Blue Zone, open to the public, and offered a wide variety of talks and panel discussions. Stand-out moments for me: a photograph of indigenous children wearing bamboo raincoats, measuring the length of Judi Dench’s tree, the emotive youth speakers from Act4Food Act4Change, and the climate research documentary Arctic Drift, in which hundreds of scientists onboard a ship carried out research whilst locked into the polar winter ice floe.  

COPCAS Blog 

During COPCAS I wrote blogs about: a Green Zone event from Space4climate, an interview by Kerry Smith with SEAChange (a community-based project in Aberdeenshire aiming to decarbonise old stone buildings) and Sports for climate action. I also carried out an interview arranged by Jo with WWF on a food systems approach to tackling climate change.

Ultimately though, the elephant in the large COP26 Blue Zone room had been there all along…

Interview with Anne Olhoff, Emissions Gap Report (EGR) 2021 Chief scientific editor and Head of Strategy, Climate Planning and Policy, UNEP DTU Partnership.

Figure 3: Source: UNEP Emissions Gap Report 2021 updated midway through week two of COP26 accounting for new pledges. 

Time is running out: midway through the second week of COP26, the United Nations Environment Programme (UNEP) presented its assessment of the change to global temperature projections based on the updated pledges so far agreed in Glasgow.  

Pledges made prior to COP26 via Nationally Determined Contributions (NDCs) put the world on track to reach a temperature increase of 2.7°C by the end of the century. To meet the Paris Agreement goal of keeping warming below 1.5°C this century, global greenhouse gas emissions must be reduced by 55% in the next eight years. At this point in COP26, updated pledges account for just an 8% reduction – seven times too small to keep to 1.5°C and four times too small to keep to 2°C. Updated projections based on COP26 so far now estimate a temperature rise of 2.4°C by 2100. Net-zero pledges could reduce this by a further 0.5°C; however, plans are sketchy and not included in NDCs. So far just five of the G20 countries are on a pathway to net zero.

Anne’s response regarding policy implementation in law: 

“Countries pledge targets for example for 2030 under the UN framework for climate change and there’s no international law to enforce them, at least not yet. Some countries have put net-zero policies into law, which has a much bigger impact as the government can be held accountable for the implementation of their pledges.” 

Following my own shock at the size of the emissions gap, I asked Anne if she feels there has been any positive changes in recent years: 

“I do think we have seen a lot of change, actually…the thing is, things are not moving as fast as they should. We have seen change in terms of the commitment of countries and the policy development and development in new technology needed to achieve the goals, these are all positive developments and here now, changing the whole narrative, just 2 years ago no one would have thought we’d have 70 countries setting net-zero emission targets…we are also seeing greater divergence between countries, between those making the effort to assist the green transition such as the UK, EU and others, and those further behind the curve such as China, Brazil and India. It’s important to help these countries transition very soon, peaking emissions and rapidly declining after that.”   

I asked Anne how countries on track can support others: 

“A lot of the great things here (at COP) is to strengthen that international collaboration and sharing of experiences, it’s an important function of the COP meeting, but we need to have the political will and leadership in the countries to drive this forward.” 

Summary 

The momentum that was apparent during the first week of COP26 seemed to have stalled with this update. Despite the monumental effort of so many scientists, NGOs, individuals and those seeking solutions from every conceivable angle, the pledges made on fossil fuel reduction are still so far from what is needed. And at the final hour (plus a day), the ambition to ‘phase out’ burning coal was changed to ‘phase down’, and the financial contributions from developed nations pledged to cover loss and damage to countries not responsible for, but impacted now by, climate change have not been realised. I think this is the first time I have really felt the true meaning of ‘climate justice’. Perhaps we do need a planet law, as it seems our political leaders do not have the will.

Overall, the COPCAS experience has been enjoyable, slightly overwhelming and emotional! It has been great to work together and to share the experiences of those in the Blue zone. It was also an amazing learning experience; I think I have barely touched the surface of the entire COP process and I would still like to understand more.

Connecting Global to Local Hydrological Modelling Forecasting – Virtual Workshop

Gwyneth Matthews g.r.matthews@pgr.reading.ac.uk
Helen Hooker h.hooker@pgr.reading.ac.uk 

ECMWF- CEMS – C3S – HEPEX – GFP 

What was it? 

The workshop was organised under the umbrella of ECMWF, the Copernicus services CEMS and C3S, the Hydrological Ensemble Prediction EXperiment (HEPEX) and the Global Flood Partnership (GFP). The workshop lasted 3 days, with a keynote speaker followed by Q&A at the start of each of the 6 sessions. Each keynote talk focused on a different part of the forecast chain, from hybrid hydrological forecasting to the use of forecasts for anticipatory humanitarian action, and how the global and local hydrological scales could be linked. Following this were speedy poster pitches from around the world and poster presentations and discussion in the virtual ECMWF (Gather.town).  

Figure 1: Gather.town was used for the poster sessions and was set up to look like the ECMWF site in Reading, complete with a Weather Room and rubber ducks. 

What was your poster about? 

Gwyneth – I presented Evaluating the post-processing of the European Flood Awareness System’s medium-range streamflow forecasts in Session 2 – Catchment-scale hydrometeorological forecasting: from short-range to medium-range. My poster showed the results of the recent evaluation of the post-processing method used in the European Flood Awareness System. Post-processing is used to correct errors and account for uncertainties in the forecasts and is a vital component of a flood forecasting system. By comparing the post-processed forecasts with observations, I was able to identify where the forecasts were most improved.  
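As a toy illustration of the kind of comparison involved, the sketch below scores a raw and a “post-processed” ensemble against observations using the ensemble CRPS, a standard probabilistic score (chosen here for illustration; it is not necessarily the exact metric used in the EFAS evaluation). Everything is synthetic.

```python
import numpy as np

def crps_ensemble(obs, ens):
    """Ensemble CRPS per time step: E|X - y| - 0.5 E|X - X'|.
    obs has shape (n_times,); ens has shape (n_times, n_members)."""
    term1 = np.mean(np.abs(ens - obs[:, None]), axis=1)
    term2 = 0.5 * np.mean(np.abs(ens[:, :, None] - ens[:, None, :]), axis=(1, 2))
    return term1 - term2

rng = np.random.default_rng(0)
obs = rng.normal(10.0, 2.0, size=100)                  # "observed" streamflow
raw = obs[:, None] + rng.normal(2.0, 3.0, (100, 25))   # biased, overspread raw ensemble
post = obs[:, None] + rng.normal(0.0, 1.5, (100, 25))  # "post-processed" ensemble

print("Mean CRPS, raw: ", crps_ensemble(obs, raw).mean())
print("Mean CRPS, post:", crps_ensemble(obs, post).mean())
```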

Helen – I presented An evaluation of ensemble forecast flood map spatial skill in Session 3 – Monitoring, modelling and forecasting for flood risk, flash floods, inundation and impact assessments. The ensemble approach to forecasting flooding extent and depth is ideal due to the highly uncertain nature of extreme flooding events. The flood maps are linked directly to probabilistic population impacts to enable timely, targeted release of funding. The Flood Foresight System forecast flood inundation maps are evaluated by comparison with satellite based SAR-derived flood maps so that the spatial skill of the ensemble can be determined.  
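For readers unfamiliar with spatial flood-map verification, here is a minimal sketch of one common score, the critical success index, applied to two binary flood masks. The toy arrays stand in for a forecast flood map and a SAR-derived observation; this is an illustration, not the Flood Foresight verification code.

```python
import numpy as np

def critical_success_index(forecast_wet, observed_wet):
    """CSI = hits / (hits + misses + false alarms) for two boolean flood masks."""
    hits = np.sum(forecast_wet & observed_wet)
    misses = np.sum(~forecast_wet & observed_wet)
    false_alarms = np.sum(forecast_wet & ~observed_wet)
    return hits / (hits + misses + false_alarms)

# Toy 2D masks standing in for a forecast member and a SAR-derived flood map.
rng = np.random.default_rng(1)
observed = rng.random((100, 100)) > 0.8
forecast = observed ^ (rng.random((100, 100)) > 0.95)  # mostly right, a few errors

print("CSI:", critical_success_index(forecast, observed))
```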

Figure 2: Gwyneth (left) and Helen (right) presenting their posters shown below in the 2-minute pitches. 

What did you find most interesting at the workshop? 

Gwyneth – All the posters! Every session had a wide range of topics being presented and I really enjoyed talking to people about their work. The keynote talks at the beginning of each session were really interesting and thought-provoking. I especially liked the talk by Dr Wendy Parker about a fitness-for-purpose approach to evaluation which incorporates how the forecasts are used and who is using the forecast into the evaluation.  

Helen – Lots! All of the keynote talks were excellent and inspiring. The latest developments in detecting flooding from satellites include processing the data using machine learning algorithms directly onboard, before beaming the flood map back to earth! If openly available and accessible (this came up quite a bit) this will potentially rapidly decrease the time it takes for flood maps to reach both flood risk managers dealing with the incident and for use in improving flood forecasting models. 

How was your virtual poster presentation/discussion session? 

Gwyneth – It was nerve-racking to give the mini-pitch to 200+ people, but the poster session in Gather.town was great! The questions and comments I got were helpful, but it was nice to have conversations on non-research-based topics and to meet some of the EC-HEPEXers (early career members of the Hydrological Ensemble Prediction Experiment). The sessions felt more natural than a lot of the virtual conferences I have been to.  

Helen – I really enjoyed choosing my hairdo and outfit for my mini self. I’ve not actually experienced a ‘real’ conference/workshop, but compared to other virtual events this felt quite realistic. I really enjoyed the Gather.town setting, especially the duck pond (although the ducks couldn’t swim or quack!). It was great to have the chance to talk about my work and meet a few people; some thought-provoking questions are always useful.  

CMIP6 Data Hackathon

Brian Lo – brian.lo@pgr.reading.ac.uk 

Chloe Brimicombe – c.r.brimicombe@pgr.reading.ac.uk 

What is it?

A hackathon, from the words hack (meaning exploratory programming, not the alternate meaning of breaching computer security) and marathon, is usually a sprint-like event where programmers collaborate intensively with the goal of creating functioning software by the end of the event. From 2 to 4 June 2021, more than a hundred early career climate scientists and enthusiasts (mostly PhDs and postdocs) from UK universities took part in a climate hackathon organised jointly by the Universities of Bristol, Exeter and Leeds, and the Met Office. The common goal was to quickly analyse certain aspects of Coupled Model Intercomparison Project 6 (CMIP6) data to output cutting-edge research that could be worked into published material and shown at this year’s COP26. 

Before the event, attendees signed up to their preferred project from a choice of ten. Topics ranged from how climate change will affect migration of arctic terns to the effects of geoengineering by stratospheric sulfate injections and more… Senior academics from a range of disciplines and institutions led each project. 

Group photo of participants at the CMIP6 Data Hackathon

How is this virtual hackathon different to a usual hackathon? 

Like many other events this year, the hackathon took place virtually, using a combination of video conferencing (Zoom) for seminars and teamwork, and discussion forums (Slack). 

Brian: 

Compared to the two 24-hour non-climate-related hackathons I previously attended, this one was spread out over three days, so I managed not to disrupt my usual sleep schedule! The experience of pair programming with one or two other team members was just as easy, since I shared one of my screens in Zoom breakout rooms throughout the event. What I really missed were the free meals, plenty of snacks and drinks usually on offer at normal hackathons to keep me energised while I programmed. 

Chloe:

I’ve been to a climate campaign hackathon before, and I did a hackathon-style event to end a group project during the computer science part of my undergraduate degree; we made the board game Buccaneer in Java. But this was set out completely differently, and it was not as time intensive as those, which was nice. I missed being in a room with the people on my project, and I’m still missing out on free food – hopefully not for too much longer. But we made use of Zoom and Slack for communication, along with JASMIN and the version control that Git offers, with individuals working on branches that were merged at the end of the hackathon. 

What did we do? 

Brian: 

Project 2: How well do the CMIP6 models represent the tropical rainfall belt over Africa? 

Using the Gaussian parameters of Nikulin & Hewitson (2019) to describe the intensity, mean meridional position and width of the tropical rainfall belt (TRB), the team I was in investigated three aspects of CMIP6 models for capturing the African TRB, namely the model biases, projections and whether there was any useful forecast information in CMIP6 decadal hindcasts. These retrospective forecasts were generated under the Decadal Climate Prediction Project (DCPP), with the aim of investigating the skill of CMIP models in predicting climate variations from a year to a decade ahead. Our larger group of around ten split ourselves amongst these three key aspects. I focused on the decadal hindcasts, comparing different decadal models at different lead times against three observation sources. 
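To show the general idea (rather than our actual hackathon code), the sketch below fits the three TRB parameters – intensity, mean meridional position and width – to a synthetic zonal-mean precipitation profile with a simple Gaussian, in the spirit of Nikulin & Hewitson (2019); the fitting details and numbers are my own simplifications.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(lat, intensity, centre, width):
    """Gaussian description of the tropical rainfall belt: peak intensity,
    mean meridional position (centre) and width."""
    return intensity * np.exp(-0.5 * ((lat - centre) / width) ** 2)

# Synthetic zonal-mean precipitation profile (mm/day), for illustration only.
lat = np.linspace(-30, 30, 121)
rng = np.random.default_rng(0)
precip = gaussian(lat, 8.0, 5.0, 6.0) + rng.normal(0.0, 0.3, lat.size)

# Fit the three TRB parameters to the profile.
(fit_intensity, fit_centre, fit_width), _ = curve_fit(gaussian, lat, precip, p0=[5.0, 0.0, 10.0])
print(f"intensity={fit_intensity:.2f} mm/day, centre={fit_centre:.2f} deg, width={fit_width:.2f} deg")
```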

Chloe: 

Project 10: Human heat stress in a warming world 

Our team leader Chris had calculated the universal thermal climate index (UTCI) – a heat stress index – for a bunch of the CMIP6 climate models. He was looking into bias correction against the ERA5-HEAT reanalysis dataset whilst we split into smaller groups. We looked at a range of different things, from how the intensity of heat stress changed to how the UTCI compared to mortality. I ended up coding with one of my (five!) PhD supervisors, Claudia Di Napoli, and we made, amongst other things, the gif below.  

https://twitter.com/ChloBrim/status/1400780543193649153
Annual means of the UTCI for RCP4.5 (medium emissions) projection from 2020 to 2099.
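For anyone wondering what sits behind an animation like that, here is a sketch of the annual-mean step with xarray; the file name and variable name are assumptions, and stitching the saved frames into a gif (e.g. with imageio) is left out.

```python
import xarray as xr
import matplotlib.pyplot as plt

# Hypothetical file of daily UTCI fields from one bias-corrected model run.
ds = xr.open_dataset("utci_rcp45_2020-2099.nc")

# Annual mean UTCI at each grid point.
annual = ds["utci"].groupby("time.year").mean("time")

# One plot per year; these frames could then be combined into a gif.
for year in annual["year"].values:
    annual.sel(year=year).plot(vmin=-40, vmax=40, cmap="RdBu_r")
    plt.title(f"Annual mean UTCI, {int(year)}")
    plt.savefig(f"utci_{int(year)}.png")
    plt.close()
```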

Would we recommend meteorology/climate-related hackathon? 

Brian: 

Yes! The three days was a nice break from my own radar research work. The event was nevertheless good training for thinking quickly and creatively to approach research questions other than those in my own PhD project. The experience also sharpened my coding and data exploration skills, while also getting the chance to quickly learn advanced methods for certain software packages (such as xarray and iris). I was amazed at the amount of scientific output achieved in only three short days! 

Chloe: 

Yes, but also make sure, if it’s online, that you block out the time and dedicate all your focus to the hackathon. Don’t be like me. The hackathon taught me more about handling netCDF files in Python, but I am not yet a Python plotting convert; there are some things R is just nicer for. And I still love researching heat stress and heatwaves, so that’s good!  

We hope that the CMIP hackathon runs again next year to give more people the opportunity to get involved. 

ECMWF/EUMETSAT NWP SAF Workshop on the treatment of random and systematic errors in satellite data assimilation for NWP

Devon Francis – d.francis@pgr.reading.ac.uk

The ECMWF/EUMETSAT NWP SAF Workshop (European Centre for Medium-Range Weather Forecasts/European Organisation for the Exploitation of Meteorological Satellites Numerical Weather Prediction Satellite Application Facilities Workshop) was originally to be held at the ECMWF centre in Reading, but as with everything else in 2020 was moved online. The workshop was designed to be a place to share new ideas and theories for dealing with errors in satellite data assimilation: encompassing the treatment of random errors; biases in observations; and biases in the model.

Group photo of attendees of the ECMWF/EUMETSAT NWP SAF Workshop on the treatment of random and systematic errors in satellite data assimilation for NWP (virtual event).

It was held over four days: consisting of oral and poster presentations; panel discussions; and concluded on the final day with the participants split into groups to discuss what methods are currently in use and what needs to be addressed in the future.

Oral Presentations

The oral presentations were split into four sessions: scene setting talks; estimating uncertainty; correction of model and observation biases; and observation errors. The talks were held over Zoom for the main presenters and shown via a live broadcast on the workshop website. This worked well as the audience could only see the individual presenter and their slides, without having the usual worry of checking that mics and videos were off for other people in the call!

Scene Setting Talks

I found the scene setting talks by Niels Bormann (ECMWF) and Dick Dee (Joint Center for Satellite Data Assimilation – JCSDA) very useful as they gave overviews of observation errors and biases respectively: both explaining the current methods as well as the evolution of different methods over the years. Both Niels and Dick are prominent names amongst data assimilation literature, so it was interesting to hear explanations of the underlying theories from the experts in the field before moving onto the more focused talks later in the day.

Correction of Model and Observation Biases

The session about the correction of model and observation biases, was of particular interest to me as it discussed many new theoretical methods for disentangling model and observation biases which are beginning to be used in operational NWP.

The first talk, by Patrick Laloyaux (ECMWF), was titled Estimation of Model Biases and Importance of Scale Separation and looked at weak-constraint 4D-Var: a variational technique that includes a model error term, such that solving the cost function involves varying three sets of variables: the state; the observation bias correction coefficients; and the model error. When the background and model errors have different spatial scales and when there are sufficient reference observations, it has been shown in a simplified model that weak-constraint 4D-Var can accurately correct model and initial state errors. They argue that the background error covariance matrix contains small spatial scales, and the model error covariance matrix contains large spatial scales, which means that the errors can be disentangled in the system. However, without this scale difference, separating the errors would be much harder, so this method can only be considered when there are vast differences within the spatial scales.
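For readers who have not met weak-constraint 4D-Var with variational bias correction before, the cost function has the schematic form below (my notation, not taken from the talk): x₀ is the initial state, β the observation bias coefficients and ηᵢ the model errors, with B, B_β, Rᵢ and Qᵢ the corresponding background, bias, observation and model error covariances, and c(β) the bias correction applied to the departures dᵢ.

```latex
J(\mathbf{x}_0, \boldsymbol{\beta}, \boldsymbol{\eta}) =
  \tfrac{1}{2} (\mathbf{x}_0 - \mathbf{x}_b)^{\mathrm{T}} \mathbf{B}^{-1} (\mathbf{x}_0 - \mathbf{x}_b)
+ \tfrac{1}{2} (\boldsymbol{\beta} - \boldsymbol{\beta}_b)^{\mathrm{T}} \mathbf{B}_{\beta}^{-1} (\boldsymbol{\beta} - \boldsymbol{\beta}_b)
+ \tfrac{1}{2} \sum_i \mathbf{d}_i^{\mathrm{T}} \mathbf{R}_i^{-1} \mathbf{d}_i
+ \tfrac{1}{2} \sum_i \boldsymbol{\eta}_i^{\mathrm{T}} \mathbf{Q}_i^{-1} \boldsymbol{\eta}_i,
\qquad
\mathbf{d}_i = \mathbf{y}_i - \mathcal{H}_i(\mathbf{x}_i) - \mathbf{c}(\boldsymbol{\beta}),
```

where the states xᵢ are propagated by the (imperfect) model plus the model error terms ηᵢ.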

On the other hand, the talk by Mark Buehner (Environment and Climate Change Canada) discussed an offline technique that performs a 3D-Var analysis every six hours using only unbiased, also known as “anchor”, observations to reduce the effects of model bias. These analyses can then be used as reference states in the main 4D-EnVar assimilation cycle to estimate the bias in the radiance observations. This method was much discussed over the course of the workshop, as it is yet to be used operationally, and it was very interesting to see a completely different bias correction technique to tackle the problem of disentangling model and observation biases.  

Posters

Poster presentations were shown via individual pages on the workshop website, with a comments section for small questions and virtual rooms – where presenters were available for a set two hours over the week. There were 12 poster presentations available, ranging from the theoretical statistics behind errors as well as operational techniques to tackle these errors.

My poster focused on figures 1 and 2, which show the scalar state analysis error variances when (1) we vary the accuracy of the state background error variance while (a) underestimating and (b) overestimating the bias background error variance; and (2) we vary the accuracy of the bias background error variance while (a) underestimating and (b) overestimating the state background error variance.

I presented a poster on work that I had been focusing on for the past few months, titled Sensitivity of VarBC to the misspecification of background error covariances. My work focused on the effects of wrongly specifying the state and bias background error covariances on the analysis error covariances for the state and the bias. This was the first poster that I had ever presented, so it was a steep learning curve in how to present detailed work clearly and in an aesthetic way. It was a useful experience as it gave me a hard deadline to conclude my current work, and I had to really think about my next steps as well as why my work was important. Presenting online was a very different experience to presenting in person, as it involved a lot of waiting around in a virtual room by myself, but when people did come I was able to have some useful conversations, as well as the added bonus of being able to share my screen to share relevant papers.
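For context, this kind of sensitivity study leans on the standard suboptimal-gain identity: if the gain is built from assumed (possibly wrong) covariances while the true errors have different statistics, the resulting analysis error covariance can be written as below. The notation is mine, applied to the augmented state of model variables plus bias coefficients; it is the general identity, not the specific expressions on my poster.

```latex
\mathbf{K} = \tilde{\mathbf{B}} \mathbf{H}^{\mathrm{T}}
             \left( \mathbf{H} \tilde{\mathbf{B}} \mathbf{H}^{\mathrm{T}} + \tilde{\mathbf{R}} \right)^{-1},
\qquad
\mathbf{P}^{a} = (\mathbf{I} - \mathbf{K}\mathbf{H})\, \mathbf{B}\, (\mathbf{I} - \mathbf{K}\mathbf{H})^{\mathrm{T}}
               + \mathbf{K}\, \mathbf{R}\, \mathbf{K}^{\mathrm{T}},
```

where tildes denote the assumed covariances used to construct the gain, and plain B and R are the true background and observation error covariances.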

Working Groups

On the final day we split ourselves into four working groups to discuss two different topics: the treatment of biases and the treatment of observation errors. The goal was to discuss current methods, as well as what we thought needed to be researched in the future or potential challenges that we would come across. This was hosted via the BlueJeans app, which provided a good space to talk as well as share screens and had the useful option to choose the ratio of viewing people’s videos, to viewing the presenter’s screen. Although I wasn’t able to contribute much, this was a really interesting day as I was able to listen to views of the experts in the field and listen to their discussions on what they believed to be the most important current issues, such as increasing discussion between data centres receiving the data and numerical weather prediction centres assimilating the data; and disentangling biases from different sources. Unfortunately for me, some of them felt that we were focussing too much on the theoretical statistics behind NWP and not enough on the operational testing, but I guess that’s experimentalists for you!

Final Thoughts

Although I was exhausted by the end of the week, the ECMWF/EUMETSAT NWP SAF Workshop was a great experience and I would love to attend next time, regardless of whether it is virtual or in person. As much as I missed the opportunity to talk to people face to face, the organisers did a wonderful job of presenting the workshop online and there were many opportunities to talk to the presenters. There were also some benefits of the virtual workshop: people from across the globe could easily join; the presentations were recorded, so can easily be re-watched (all oral and poster presentations can be found via this link – https://events.ecmwf.int/event/170/overview); and resource sharing was easy via screen sharing. I wonder whether future workshops and conferences could be a mixture of online as well as in person, in order to get the best of both worlds? I would absolutely recommend this workshop, both for people who are just starting out in DA as well as for researchers with years of experience, as it encompassed presentations from big names who have been working in error estimation for many years as well as new presenters and new ideas from worldwide speakers.

Workshop on Predictability, dynamics and applications research using the TIGGE and S2S ensembles

Email: s.h.lee@pgr.reading.ac.uk

From April 2nd-5th I attended the workshop on Predictability, dynamics and applications research using the TIGGE and S2S ensembles at ECMWF in Reading. TIGGE (The International Grand Global Ensemble, formerly THORPEX International Grand Global Ensemble) and S2S (Sub-seasonal-to-Seasonal) are datasets hosted primarily at ECMWF as part of initiatives by the World Weather Research Programme (WWRP) and the World Climate Research Programme (WCRP). TIGGE has been running since 2006 and stores operational medium-range forecasts (up to 16 days) from 10 global weather centres, whilst S2S has been operational since 2015 and houses extended-range (up to 60 days) forecasts from 11 different global weather centres (e.g. ECMWF, NCEP, UKMO, Meteo-France, CMA, etc.). The benefit of these centralised datasets is their common format, which enables straightforward data requests and multi-model analysis with minimal data manipulation, allowing scientists to focus on doing science!

Attendees of the workshop came from around the world (not just Europe) although there was a particularly sizeable cohort from Reading Meteorology and NCAS.

Figure 1: Workshop group photo featuring the infamous ECMWF ducks!

In my PhD so far, I have been making extensive use of the S2S database – looking at both operational and re-forecast datasets to assess stratospheric predictability and biases – and it was rewarding to attend the workshop and see what a diverse range of applications the datasets have across the world. From the oceans to the stratosphere, tropics to poles, predictability mathematics to farmers and energy markets, it was immediately very clear that TIGGE and S2S are wonderfully useful tools for both the research and applications communities. A particular aim of the workshop was to discuss “user-oriented variables” – derived variables from model output which represent the meteorological conditions to which a user is sensitive (such as wind speed at a specific height for wind power applications).

The workshop mainly consisted of 15-minute conference-style talks in the main lecture theatre and poster sessions, but the final two days also featured parallel working group sessions of about 15 members each. The topics discussed in the working groups can be found here. I was part of working group 4, and we discussed dynamical processes and ensemble diagnostics. We reflected on some of the points raised by speakers over the preceding days – particular attention was given to the diagnostics needed to understand dynamical effects of model biases (such as their influence on Rossby wave propagation and weather-regime transition), alongside what other variables researchers needed to make full use of the potential S2S and TIGGE offer (I don’t think I could say “more levels in the stratosphere!” loudly enough – TIGGE does not go above 50 hPa, which is not useful when studying stratospheric warming events defined at 10 hPa).

Data analysis tools are also becoming increasingly important in atmospheric science. Several useful and perhaps less well-known tools were presented at the workshop – Mio Matsueda’s TIGGE and S2S museum websites provide a wide variety of pre-prepared plots of variables like the NAO and MJO which are excellent for exploratory data analysis without needing many gigabytes of data downloads. Figure 2 shows an example of NAO forecasts from S2S data – the systematic negative NAO bias at longer lead-times was frequently discussed during the workshop, whilst the inability to capture the transition to a positive NAO regime beginning around February 10th is worth further analysis. In addition to these, IRI’s Data Library has powerful abilities to manipulate, analyse, plot, and download data from various sources including S2S with server-side computation.


Figure 2: Courtesy of the S2S Museum, this figure shows S2S model forecasts of the NAO launched on January 31st 2019. The verifying scenario is shown in black, with ensemble means in grey. All models exhibited a negative ensemble-mean bias and did not capture the development of a positive NAO after February 10th.

It’s inspiring and motivating to be part of the sub-seasonal forecast research community and I’m excited to present some of my work in the near future!

TIGGE and S2S can be accessed via ECMWF’s Public Datasets web interface.
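For anyone who prefers scripting retrievals, the same requests can be made with the ecmwf-api-client Python package once you have an ECMWF API key; the keyword values below are purely illustrative, and for real use you should copy the exact MARS request that the web interface generates.

```python
# pip install ecmwf-api-client; requires ~/.ecmwfapirc with your API key.
from ecmwfapi import ECMWFDataServer

server = ECMWFDataServer()
server.retrieve({
    "class": "ti",           # TIGGE archive
    "dataset": "tigge",
    "expver": "prod",
    "origin": "ecmf",        # ECMWF ensemble
    "type": "cf",            # control forecast
    "levtype": "sfc",
    "param": "167",          # 2 m temperature (illustrative choice)
    "date": "2019-01-31",
    "time": "00:00:00",
    "step": "0/to/240/by/24",
    "grid": "0.5/0.5",
    "target": "tigge_t2m.grib",
})
```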