Climate Resilience Evidence Synthesis Training 

Lily Greig – l.greig@pgr.reading.ac.uk 

The Walker Academy, the capacity strengthening arm of the Walker Institute based at the University of Reading, holds a brilliant week-long training course every year named Climate Resilience Evidence Synthesis Training (CREST). The course helps PhD students from all disciplines to understand the role of academic research within wider society. I'm a third year PhD student studying ocean and sea ice interaction, and I wanted to do the course because I'm interested in how to communicate scientific research better, and in the process by which research is used to inform policy. The other students who participated were mainly from SCENARIO or MPECDT, studying a broad range of subjects from Agriculture to Mathematics.

The Walker Institute  

The Walker Institute is an interdisciplinary research institute supporting the development of climate resilient societies. Its research relates to the impacts of climate variability, which include social inequality, conflict, migration and loss of biodiversity. The projects at Walker involve partnership with communities in low-income countries to increase climate resilience on the ground.

The institute follows a systems-based approach, in which project stakeholders (e.g., scientists, village duty bearers, governments and NGOs) collaborate and communicate continuously, with the aim of making the best decisions for all. Such an approach allows, for example, communities on the ground (such as a village in North East Ghana affected by flooding) to vocalise their needs or future visions, meaning scientific research performed by local or national meteorological agencies can be targeted and communicated according to those specific needs. Equally, with such a communication network, governments are able to understand how best to continually reinforce those connections between scientists and farmers, and to make the best use of available resources or budgets. In this way, the key stakeholders form part of an interacting, constantly evolving complex system.

Format and Activities 

The course started off with introductory talks on the Walker's work, with guest speakers from Malawi (Social Economic Research and Interventions Development) and Vietnam (Himalayan University Consortium). On the second day, we explored the topic of communication in depth, which included an interactive play based on the negotiation of a social policy plan in Senegal. The play involved stepping on stage and improvising lines ourselves when we spotted a problem in the negotiations. An example of this was a disagreement between two climate scientists and the social policy advisor to the President: the scientists knew that rainfall would get worse in the capital, but the policy advisor understood that people's livelihoods were actually more vulnerable elsewhere. Somebody stepped in and helped both characters understand that the need for climate resilience was more widespread than each individual character had originally thought.

Quick coffee break after deciphering the timeline of the 2020 floods in North East Ghana.

The rest of the week consisted of speedy group work on our case study of increasing climate resilience to annual flood disasters in North East Ghana, putting together a policy brief and presentation. We were each assigned a stakeholder position, from which we were to propose future plans. Our group was assigned the Ghanaian government. We collected evidence to support our proposed actions (for example, training government staff on flood action well in advance of a flood event, rather than as an emergency response) and built a case for why those actions would improve people's livelihoods.

Alongside this group work, we heard from many more valuable guest speakers (the full list is below), each of whom gave their own unique viewpoint on working towards climate resilience.

List of guest speakers 

Day 1: Chi Huyen Truong: Programme Coordinator Himalayan University Consortium, Mountain Knowledge and Action Networks 

Day 1: Stella Ngoleka: Country Director at Social Economic Research and Interventions Development – SERID and HEA Practitioner  

Day 2: Hannah Clark: Open Source Farmer Radio Development Manager, Lorna Young Foundation 

Day 2: Miriam Talwisa: National Coordinator at Climate Action Network-Uganda 

Day 3: panel speakers:  

Irene Amuron: Program Manager, Anticipatory Action at Red Cross Red Crescent Climate Centre 

Gavin Iley: International Expert, Crisis Management & DRR at World Meteorological Organization 

James Acidri: Former member of the Ugandan Parliament, Senior Associate at Evidence for Development

Day 4: Tesse de Boer: Technical Advisor at the Red Cross Red Crescent Climate Centre

Day 5: Peter Gibbs: Freelance Meteorologist & Broadcaster 

Course Highlights 

Everyone agreed that the interactive play was a highly engaging and unusual format, and one I had not yet encountered in my PhD journey! It allowed some of us to step right into the shoes of someone whose point of view we had potentially never stopped to consider before, like a government official or a media reporter…

The 2022 CREST organisers and participants. Happy faces at the end of an enjoyable course!

Something else that really stayed with me was a talk given by the National Coordinator at Climate Action Network Uganda, Miriam Talwisa. She shared loads of creative ideas about how to empower climate action in small or low-income communities. These included the concept of community champions, media cafes, community dialogues, and alternative policy documentation such as citizens' manifestos or visual documentaries. This helped me to think about my own local community and how such tools could be used to encourage climate action at the grassroots level.

Takeaways  

An amazing workshop, run by a lovely and supportive team who created a genuinely welcoming atmosphere. I took away a lot from the experience, and I think the other students did too. It really helped us to think about our own research, our key stakeholders, and why reaching out to them matters so much.

Forecast Verification Summer School

Lily Greig – l.greig@pgr.reading.ac.uk

A week-long summer school on forecast verification was held jointly at the end of June by the MPECDT (Mathematics of Planet Earth Centre for Doctoral Training) and the JWGFVR (Joint Working Group on Forecast Verification Research). The school featured lectures from scientists and academics from many different countries, including Brazil, the USA and Canada, each specialising in a different topic within forecast verification. Participants gained a broad overview of the field and of how its different areas fit together.

Structure of school

The virtual school consisted of lectures from individual members of the JWGFVR on their own subjects, along with drop-in sessions for asking questions and dedicated time to work on group projects. Four groups of 4-5 students were each given a forecast verification challenge. The themes of the projects were precipitation forecasts, comparing wind speed forecasts from a high resolution global model and a local area model, and ensemble seasonal forecasts. The latter was the topic of our project.

Content

The first lecture was given by Barbara Brown, who provided a broad summary of verification and gave examples of questions that verifiers may ask themselves as they attempt to assess the "goodness" of a forecast. The next day, a lecture by Barbara Casati covered continuous scores (verification of continuous variables, e.g. temperature), such as linear bias, mean-squared error (MSE) and the Pearson correlation coefficient. She also outlined the limitations of different scores and why it is best to use a variety of them when assessing the quality of a forecast. Marion Mittermaier then spoke about categorical scores (yes/no events, or multi-category events such as precipitation type). She gave examples such as contingency tables, which portray how well a model is able to predict a given event based on hit rates (how often the model predicted an event when the event happened) and false alarm rates (how often the model predicted the event when it didn't happen).

Further lectures were given by Ian Jolliffe on methods of determining the significance of your forecast scores, Nachiketa Acharya on probabilistic scores and ensembles, Caio Coelho on sub-seasonal to seasonal timescales, and then Raghavendra Ashrit, Eric Gilleland and Caren Marzban on severe weather, spatial verification and experimental design. The lectures have been made available online and you can find them here.
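To make a couple of those scores more concrete, here is a minimal sketch in Python (my own illustration with made-up numbers, not code from the lectures), assuming paired NumPy arrays of forecasts and observations:

```python
import numpy as np

def continuous_scores(forecast, observed):
    """Linear bias, mean-squared error and Pearson correlation for a continuous variable."""
    bias = np.mean(forecast - observed)            # systematic over- or under-forecasting
    mse = np.mean((forecast - observed) ** 2)      # mean-squared error
    r = np.corrcoef(forecast, observed)[0, 1]      # Pearson correlation coefficient
    return bias, mse, r

def categorical_scores(forecast_event, observed_event):
    """Hit rate and false alarm rate from a 2x2 contingency table (boolean arrays)."""
    hits = np.sum(forecast_event & observed_event)
    misses = np.sum(~forecast_event & observed_event)
    false_alarms = np.sum(forecast_event & ~observed_event)
    correct_negatives = np.sum(~forecast_event & ~observed_event)
    hit_rate = hits / (hits + misses)                                     # forecast yes, observed yes
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)  # forecast yes, observed no
    return hit_rate, false_alarm_rate

# Made-up 2m temperature example (degrees C), with a 22 C threshold defining a "warm event"
obs = np.array([21.0, 23.5, 19.8, 25.1, 22.3])
fcst = np.array([20.2, 24.0, 21.0, 24.5, 23.1])
print(continuous_scores(fcst, obs))
print(categorical_scores(fcst > 22.0, obs > 22.0))
```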

Forecast Verification

So, forecast verification is as it sounds: it is the part of assessing the 'goodness' of a forecast concerned with its quality, as opposed to its value. Verification is helpful for economic purposes (e.g. decision making), as well as administrative and scientific ones (e.g. identifying model flaws). The other aspect of measuring how well a forecast is performing is knowing the user's needs, and therefore how the forecast will be applied. It is important to consider the goal of your verification process beforehand, as it will shape your choice of metrics and your assessment of them. An example of how forecast goodness hinges on the user was given by Barbara in her talk: a precipitation forecast may have a spatial offset in where a rain patch falls, but if both the observed and forecast rain fall along the flight path, this may be all an aviation traffic strategic planner needs to know. For a watershed manager on the ground, however, this would not be a helpful forecast. The lecturers also emphasised the importance of applying many different measures to a forecast and then understanding the significance of those measures, in order to understand its overall goodness. Identifying standards of comparison for your forecast, such as persistence or climatology, is also important. Then there are further challenges such as spatial verification, which requires methods of 'matching' the location of your observations with the model predictions on the model grid.
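As a small, self-contained illustration of comparing a forecast against such reference standards (my own sketch using assumed, hypothetical data, not material from the school), a mean-squared-error skill score of the form SS = 1 - MSE_forecast / MSE_reference can be computed against climatology and persistence:

```python
import numpy as np

def mse(forecast, observed):
    return np.mean((np.asarray(forecast) - np.asarray(observed)) ** 2)

def skill_score(forecast, reference, observed):
    """MSE skill score: positive means the forecast beats the reference, 1 is a perfect forecast."""
    return 1.0 - mse(forecast, observed) / mse(reference, observed)

obs = np.array([15.2, 16.0, 14.7, 17.1, 16.4])    # verifying observations (hypothetical)
fcst = np.array([15.0, 16.5, 15.1, 16.8, 16.0])   # model forecast (hypothetical)

climatology = np.full_like(obs, obs.mean())        # climatological mean as a reference forecast
persistence = np.roll(obs, 1)                      # previous value persisted forward
persistence[0] = obs[0]                            # no earlier observation for the first entry

print("Skill vs climatology:", skill_score(fcst, climatology, obs))
print("Skill vs persistence:", skill_score(fcst, persistence, obs))
```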

Figure 1: Problem statement for group presentation on 2m temperature ensemble seasonal forecasts, presented by Ryo Kurashina

Group Project

Our project was on the verification of 2 metre temperature ensemble seasonal forecasts (see Figure 1). We were looking at seasonal forecast data with a 1-month lead time for the summer months from three different models, and investigating ways of validating the forecasts in order to decide which was best. We decided to focus on the models' ability to predict hot and cold events as a simple metric related to El Niño. We looked at scatter plots and rank histograms to investigate the biases in our data, Brier scores for assessing model accuracy (the level of agreement between forecast and truth), and Receiver Operating Characteristic (ROC) curves to look at model skill (the relative accuracy of the forecast over some reference forecast). The ROC curve (see Fig. 2) refers to the curve formed by plotting hit rates against false alarm rates for a range of probability thresholds. The further above the diagonal line your curve lies, the better your forecast is at discriminating events compared to a random coin toss. The combination of these verification methods was used to assess which model we thought was best.
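For anyone curious what those probabilistic measures look like in code, here is a rough sketch with synthetic data (assumed names and numbers, not our actual project code) of a Brier score and the hit and false alarm rates that trace out a ROC curve, for a simple "hot event" threshold applied to an ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_members = 30, 15

# Synthetic observed 2m temperature anomalies and an ensemble scattered around them
obs_anomaly = rng.normal(size=n_years)
ens = obs_anomaly[:, None] + rng.normal(scale=1.2, size=(n_years, n_members))

event_obs = obs_anomaly > 0.5                  # "hot event" in the observations
event_prob = np.mean(ens > 0.5, axis=1)        # fraction of members forecasting the event

# Brier score: mean squared difference between forecast probability and the 0/1 outcome
brier = np.mean((event_prob - event_obs.astype(float)) ** 2)
print(f"Brier score: {brier:.3f}")

# ROC curve: hit rate vs false alarm rate as the probability threshold for issuing a "yes" varies
for p in np.linspace(0.1, 0.9, 9):
    warn = event_prob >= p
    hits = np.sum(warn & event_obs)
    misses = np.sum(~warn & event_obs)
    false_alarms = np.sum(warn & ~event_obs)
    correct_negatives = np.sum(~warn & ~event_obs)
    hit_rate = hits / max(hits + misses, 1)
    false_alarm_rate = false_alarms / max(false_alarms + correct_negatives, 1)
    print(f"threshold {p:.1f}: hit rate {hit_rate:.2f}, false alarm rate {false_alarm_rate:.2f}")
```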

Of course, a virtual summer school is less than ideal compared to the real (in-person) deal, but with Teams meetings, shared code and a chat channel we made the most of it. It was fun to work with everyone, even (or especially?) when the topic was new to all of us.

Figure 2: Presenting our project during group project presentations on Friday

Conclusions

The summer school was incredibly smoothly run, very engaging for people both new to and experienced in the topic, and provided plenty of opportunity to ask questions of the enthusiastic lecturers. I would recommend it to any PhD student working with forecasts and wanting to assess them!