When the Lakes Remember: Unravelling the Sudd Floods of 2022

By Douglas Mulangwa – d.mulangwa@pgr.reading.ac.uk

Between 2019 and 2024, East Africa experienced one of the most persistent high-water periods in modern history: a flood that simply would not recede. Lakes Victoria, Kyoga, and Albert all rose to exceptional levels, and the Sudd Wetland in South Sudan expanded to an unprecedented 163,000 square kilometres in 2022. More than two million people were affected across Uganda and South Sudan as settlements, roads, and farmland remained inundated for months.

At first, 2022 puzzled stakeholders, observers and scientists alike. Rainfall across much of the region was below average that year, yet flooding in the Sudd intensified. This prompted a closer look at the wider hydrological system. Conventional explanations based on local rainfall failed to account for why the water would not recede. The answer, it turned out, lay far upstream and more than a year earlier, hidden within the White Nile’s connected lakes and wetlands.

Figure 1: Map of the White Nile Basin showing delineated sub-catchments, lakes, major rivers, and the Sudd Wetland extent. Sub-catchments are labelled numerically (1–15) with names listed in the legend. Observation stations (A–F) mark key hydrological data collection locations used in this study: Lake Victoria (A), Lake Kyoga (B), River Nile at Masindi Port (C), Lake Albert (D), River Nile at Juba (E), and the Sudd Wetland (F). Background river networks and sub-catchment boundaries are derived from the HydroSHEDS dataset, and wetland extent is based on MODIS flood mask composites. The map is shown in geographic coordinates (EPSG:4326), with a graduated scale bar computed in UTM Zone 36N for accurate distance representation.

The White Nile: A Basin with Memory

The White Nile forms one of the world’s most complex lake, river, and wetland systems, extending from Lake Victoria through Lakes Kyoga and Albert into the Sudd. Hydrologically, it is a system of connected reservoirs that store, delay, and gradually release floodwaters downstream.

For decades, operational planning assumed that floodwaters take roughly five months to travel from Lake Victoria to the Sudd. That estimate was never actually tested with data; it originated as a rule of thumb based on Lake Victoria annual maxima in May and peak flooding in South Sudan in September/October.

Our recent study challenged that assumption. By combining daily lake-level and discharge data (1950–2024) with CHIRPS rainfall and MODIS flood-extent records (2002–2024), we tracked how flood peaks propagated through the system, segment by segment. Using an automated peak-matching algorithm, we quantified the lag between successive annual maxima in Lake Victoria, Lake Kyoga, Lake Albert, and the Sudd Wetland.
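
To give a flavour of the kind of peak-matching involved, here is a minimal Python sketch that finds annual maxima in two daily series, pairs each upstream peak with the next downstream peak, and converts the offset into months. The function names, the 270-day peak separation, and the 24-month search window are illustrative assumptions for this sketch, not the study's actual code.

# Minimal sketch of lag estimation between annual maxima at two stations.
# Assumes daily series held as pandas Series with a DatetimeIndex.
import numpy as np
import pandas as pd
from scipy.signal import find_peaks

def annual_maxima(series: pd.Series) -> pd.Series:
    """Return the date of the highest peak in each calendar year."""
    # Require peaks to be at least ~9 months apart so we keep roughly one per year.
    idx, _ = find_peaks(series.values, distance=270)
    peaks = series.iloc[idx]
    return peaks.groupby(peaks.index.year).apply(lambda s: s.idxmax())

def match_lags(upstream: pd.Series, downstream: pd.Series, max_lag_months: float = 24.0) -> np.ndarray:
    """Pair each upstream annual maximum with the next downstream maximum
    within max_lag_months, and return the lags in months."""
    up_dates = annual_maxima(upstream)
    down_dates = annual_maxima(downstream)
    lags = []
    for t_up in up_dates:
        later = [t for t in down_dates if t > t_up]
        if not later:
            continue
        lag = (min(later) - t_up).days / 30.44   # mean month length
        if lag <= max_lag_months:
            lags.append(lag)
    return np.array(lags)

# Usage (hypothetical variable names): lags = match_lags(victoria_levels, sudd_extent); print(lags.mean())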

The Unprecedented High-Water Regime of 2019–2024

Figure 2: Lake Victoria water levels (1950–2024) and Sudd Wetland extents (2002–2024), with the 2019–2024 anomalous period shown in dark blue and earlier observations in black. The orange dotted line marks the pre-2019 maximum, while the solid vermillion line denotes the highest peak observed during 2019–2024. The dashed magenta line represents the reconstructed 1878 Lake Victoria peak (1137.3 m a.s.l.) from Nicholson & Yin (2001). The shaded grey band highlights the 2022 flood year, when the Sudd reached its largest extent in the MODIS record.

Between 2019 and 2024, both Lake Victoria and the Sudd reached record levels. Lake Victoria exceeded its historic 1964 peak in 2020, 2021, and 2024, while the Sudd expanded to more than twice its previous maximum extent. Each year from 2019 to 2024 stayed above any pre-2019 record, revealing that this was not a single flood season but a sustained multi-year regime.

The persistence of the 2019–2024 high-water regime mirrors earlier basin-wide episodes, including the 1961–64 and 1870s floods, when elevated lake levels and wetland extents were sustained across multiple years rather than confined to a single rainy season.  However, the 2020s stand out as the most extensive amongst all the episodes since the start of the 20th century. These data confirm that both the headwaters and terminal floodplain remained at record levels for several consecutive years during 2019–2024, highlighting the unprecedented nature of this sustained high-water phase in the modern observational era.

2019–2024: How Multi-Year Rainfall Triggers Propagated a Basin-Wide Flood

The sequence of flood events began with the exceptionally strong positive Indian Ocean Dipole of 2019, which brought extreme rainfall across the Lake Victoria basin. This marked the first in a series of four consecutive anomalous rainfall seasons that sustained elevated inflows into the lake system. The October–December 2019 short rains were among the wettest on record, followed by above-normal rainfall in the March–May 2020 long rains, another wet short-rains season in late 2020, and continued high rainfall through early 2021. Together, these back-to-back wet seasons kept catchments saturated and prevented any significant drawdown of lake levels between seasons. Lake Victoria rose by more than 1.4 metres between September 2019 and May 2020, the highest increase since the 1960s, and remained near the 1960s historical maximum for consecutive years. As that excess water propagated downstream, Lakes Kyoga and Albert filled and stayed high through 2021. Even when regional rainfall weakened in 2022, these upstream lakes continued releasing stored water into the White Nile. The flood peak that reached the Sudd in 2022 corresponded closely to the 2021 Lake Victoria high-water phase.

This sequence shows that the 2022 disaster was not driven by a single rainfall event but by cumulative wetness over multiple seasons. Each lake acted as a slow reservoir that buffered and then released the 2019 to 2021 excess water, resulting in multi-year flooding that persisted long after rainfall had returned to near-normal levels.

Transit Time and Floodwave Propagation

Quantitative tracking showed that it takes an average of 16.8 months for a floodwave to travel from Lake Victoria to the Sudd. The fastest transmission occurs between Victoria and Kyoga (around 4 months), while the slowest and most attenuated segment lies between Albert and the Sudd (around 9 months).

This overturns the long-held assumption of a five-month travel time and reveals a system dominated by floodplain storage and delayed release. The 2019–2021 period showed relatively faster propagation because of high upstream storage, while 2022 exhibited the longest lag as the Sudd absorbed and held vast volumes of water. By establishing this timing empirically, the study offers a more realistic foundation for early-warning systems.

Figure 3: Lake Victoria, Lake Kyoga, and Lake Albert water levels, and Sudd Wetland inundated extent, from 2016 to 2024. Coloured spline curves indicate annual flood-wave trajectories traced from the timing of Lake Victoria annual maxima downstream through the White Nile system. Blue shading on the secondary (right) axis shows 180-day rolling rainfall totals over each basin. The panel sequence (Victoria–Kyoga, Kyoga–Albert, Albert–Sudd) highlights the progressive translation of flood waves through the connected lake–river–wetland network.

Wetland Activation and Flood Persistence

Satellite flood-extent maps reveal how the Sudd responded once the inflow arrived. The wetland expanded through multiple activation arms that progressively connected different sub-catchments:

  • 2019: rainfall-fed expansion on the east (Baro–Akobo–Sobat and White Nile sub-basins)
  • 2020–2021: a central-western arm extending from Bahr el Jebel into Bahr el Ghazal, and a north-western arm linking Bahr el Jebel to Bahr el Arab around Bentiu in Unity State.
  • 2022: both activated arms persisted, so the JJAS seasonal rainfall over South Sudan and the inflow from the upstream lakes compounded one another, producing the massive flooding that turned Bentiu into an island surrounded by water.

This geometry confirms that the Sudd functions not as a single floodplain but as a network of hydraulically linked basins. Once activated, these wetlands store and recycle water through backwater effects, evaporation, and lateral flow between channels. That internal connectivity explains why flooding persisted long after rainfall declined.

The Bigger Picture

Understanding these long lags is vital for effective flood forecasting and anticipatory humanitarian action. Current early-warning systems in South Sudan and Uganda mainly rely on short-term rainfall forecasts, which cannot capture the multi-season cumulative storage and delayed release that drive multi-year flooding.

By the time floodwaters reach the Sudd Wetland, the hydrological signature of releases from Lake Victoria has been substantially transformed by storage, delay, and attenuation within the intermediate lakes and wetlands. This means that downstream flood conditions are not a direct reflection of upstream releases but the result of cumulative interactions across the basin’s interconnected reservoirs.

The results suggest that antecedent storage conditions in Lakes Victoria, Kyoga, and Albert should be incorporated into regional flood outlooks. When upstream lake levels are exceptionally high, downstream alerts should remain elevated even if rainfall forecasts appear moderate. This approach aligns with impact-based forecasting, where decisions are informed not only by rainfall predictions but also by hydrological memory, system connectivity and potential impact of the floods.
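
As a purely hypothetical illustration of what folding hydrological memory into an outlook could look like, the sketch below keeps the downstream alert elevated whenever upstream lake-level anomalies are large, whatever the rainfall outlook says. The thresholds, categories, and variable names are invented for illustration; they are not the operational rule proposed in the study.

# Hypothetical, simplified illustration of combining upstream "memory" with a
# rainfall outlook in a downstream flood alert. Thresholds are illustrative only.
def sudd_flood_outlook(victoria_anom_m: float,
                       albert_anom_m: float,
                       rain_outlook: str) -> str:
    """Return a qualitative alert level from upstream level anomalies
    (metres above the long-term mean) and a categorical rainfall outlook."""
    high_storage = victoria_anom_m > 0.5 or albert_anom_m > 0.5
    wet_forecast = rain_outlook in ("above_normal", "much_above_normal")
    if high_storage and wet_forecast:
        return "severe"
    if high_storage:           # hydrological memory alone keeps the alert up
        return "elevated"
    if wet_forecast:
        return "watch"
    return "normal"

# e.g. a 2022-like case: near-normal rainfall but lakes still well above climatology
print(sudd_flood_outlook(1.1, 0.8, "near_normal"))   # -> "elevated"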

The 2019–2024 high-water regime joins earlier basin-wide flood episodes in the 1870s, 1910s, and 1960s, each linked to multi-year wet phases across the equatorial lakes. The 1961–64 event raised Lake Victoria by about 2.5 metres and reshaped the Nile’s flow for several years. The 1870s flood appears even more extensive, showing that compound, persistent flooding is part of the White Nile’s natural variability.

Climate-change attribution studies indicate that the 2019–2020 rainfall anomaly was intensified by anthropogenic warming, increasing both its magnitude and probability. If such events become more frequent, the basin’s long-memory behaviour could convert short bursts of rainfall into multi-year high-water regimes.

This work reframes how we view the White Nile. It is not a fast, responsive river system but a slow-moving memory corridor in which floodwaves propagate, store, and echo over many months. Recognising this behaviour opens practical opportunities: it enables longer forecast lead times based on upstream indicators, supports coordinated management of lake releases, and strengthens early-action planning for humanitarian agencies across the basin.

It also highlights the need for continued monitoring and data sharing across national borders. Sparse observations remain a major limitation: station gaps, satellite blind spots, and non-public lake-release data all reduce our ability to model the system in real time. Improving this observational backbone is essential if we are to translate scientific insight into effective flood preparedness.

By Douglas Mulangwa (PhD researcher, Department of Meteorology, University of Reading), with contributions from Evet Naturinda, Charles Koboji, Benon T. Zaake, Emily Black, Hannah Cloke, and Elisabeth M. Stephens.

Acknowledgements

This research was conducted under the INFLOW project, funded through the CLARE programme (FCDO and IDRC), with collaboration from the Uganda Ministry of Water and Environment, the South Sudan Ministry of Water Resources and Irrigation, the World Food Programme (WFP), the IGAD Climate Prediction and Applications Centre (ICPAC), Médecins Sans Frontières (MSF), the Red Cross Red Crescent Climate Centre (RCCC), the Uganda Red Cross Society (URCS), and the South Sudan Red Cross Red Crescent Society (SSRCS).

Models and Memories: Our NCAS CMSS 2025 Experience

Piyali Goswami: p.goswami@pgr.reading.ac.uk

Mehzooz Nizar: m.nizar@pgr.reading.ac.uk

This September, we attended the NCAS Climate Modelling Summer School (CMSS), held at the University of Cambridge from 8th to 19th September. Five of us from the University of Reading joined this two-week residential programme. It was an intense and inspiring experience, full of lectures, coding sessions, discussions, and social events. In this blog, we would like to share our experiences.

 Picture 1: Group Picture of Students and teaching staff. One cohort, many time zones, zero dull moments…

About NCAS CMSS

The NCAS Climate Modelling Summer School (CMSS) is a visionary programme, launched in 2007 with funding originating from grant proposals led by Prof. Pier Luigi Vidale. Run by leading researchers from the National Centre for Atmospheric Science and the University of Reading, it is an immersive, practice-driven programme that equips early-career researchers and PhD students with deeper expertise in climate modelling, Earth system science, and state-of-the-art computing. Held biennially in Cambridge, CMSS has trained 350 students from roughly 40 countries worldwide.

The CMSS 2025 brought together around 30 participants, including PhD students and professionals interested in the field of Climate Modelling. 

Long Days, Big Ideas: Inside Our Schedule

The school was full of activity from morning to evening. We started around 9:00 AM and usually wrapped up by 8:30 PM, with a good mix of lectures, practical sessions, and discussions that made the long days fly by.

Week 1 was led by Dr Hilary Weller, who ran an excellent series on Numerical Methods for Atmospheric Models. Mornings were devoted to lectures covering core schemes; afternoons shifted to hands-on Python sessions to implement and test the methods. Between blocks, invited talks from leading researchers across universities highlighted key themes in weather and climate modelling. After dinner, each day closed with a thought-provoking discussion on climate modelling, chaired by Prof. Pier Luigi Vidale, where participants shared ideas on improving models and their societal impact. 

The week concluded with group presentations summarising the key takeaways from Hilary’s sessions and our first collaborative activity that set the tone for the rest of the school. It was followed by a relaxed barbecue evening, where everyone finally had a chance to unwind, chat freely, and celebrate surviving our first week together. 

Picture 2 : Working on our group projects. Looks like NASA, feels like: ‘what’s our team name?’

Week 2 was all about getting hands-on with a climate model and learning how to analyse its output. We moved into group projects using SpeedyWeather.jl, a global atmospheric model with simplified physics designed as a research playground, to design and run our own climate model experiments. One of the developers of SpeedyWeather.jl, Milan Klöwer, was with us throughout the week to guide and support our work. Each team explored a different question, from sensitivity testing to analysing the model outputs, and spent the afternoons debugging, plotting, and comparing results. Evenings featured talks from leading scientists on topics such as the hydrological cycle, land–atmosphere interactions, and the carbon cycle.

The week also included a formal dinner at Sidney Sussex, a welcome pause before our final presentations. On Friday 19th of September, every group presented its findings before we all headed home. Some slides were finished only seconds before presenting, but the atmosphere was upbeat and supportive. It was a satisfying end to two weeks of hard work, shared learning, and plenty of laughter. A huge thank you to the teaching team for being there through everything, from the “silly” questions to the stubborn bugs. Your patience, clarity, and genuine care made all the difference.

Picture 3: SpeedyWeather, as told by its favourite storyteller Milan. Picture 4: Pier Luigi probably preparing for the next summer school…

Coffee, Culture, and Climate Chat

The best part of the summer school was the people. The group was diverse: PhD students and professionals from different countries and research areas. We spent nearly every moment together, from breakfast to evening socials, often ending the day with random games of “Would You Rather” or talking about pets. The summer school’s packed schedule brought us closer and sparked rich chats about science and life, everything from AI’s role in climate modelling to the policy levers behind climate action. We left with a lot to think about. Meeting people from around the world exposed us to rich cultural diversity and new perspectives on how science is practised in different countries, insights that were both fresh and valuable. It went beyond training: we left with skills, new friends, and the seeds of future collaborations, arguably the most important part of research.

Picture 5: Barbecue evening after wrapping up the first week, Picture 6: Formal dinner at Sidney Sussex, one last evening together before the final presentations

Reflections and takeaways

We didn’t become expert modellers in two weeks, but we did get a glimpse of how complex and creative climate modelling can be. The group presentations were chaotic but fun. Different projects, different approaches, and a few slides that weren’t quite finished in time. Some of us improvised more than we planned, but the atmosphere was supportive and full of laughter. More than anything, we learned by doing and by doing it together. The long days, the discussions, and the teamwork made it all worthwhile.

If you ever get the chance to go, take it. You’ll come back with new ideas, good memories, and friends who make science feel a little more human.

For the future participants

The NCAS CMSS usually opens in early spring, with applications closing around June. With limited spots, selection is competitive and merit-based, evaluating both fit for the course and the expected benefit to the student.

Bring curiosity, enthusiasm, and a healthy dose of patience; you’ll need all three. But honestly, that’s what makes it fun. You learn quickly, laugh a lot, and somehow find time to celebrate when a script finally runs without error. By the end, you’ll be tired, happy, and probably a little proud of how much you managed to do (and you’ll probably have made a few new friends who helped you debug along the way).

The Making of STRATATOUILLE

By Elliot Mckinnon-Gray and Niamh Ocallaghan

The making of 2024’s departmental pantomime Stratatouille actually began all the way back in the summer of that year. There had been conversations on some stifling hot days (not that there were many!) around the theme for the current year’s panto. A film viewing had taken place and characters had begun to be assigned. However, come October, no one could have predicted the twist this tale would take, leading to the majority of PhD students reluctantly(?) donning chef hats and rat ears for the best part of the coming December.

STRATATOUILLE Poster

Our story actually begins even earlier, mere days after the roaring success of sshRACC in December 2023 – as is apparently tradition – the fabled Panto Cupboard Key was foisted upon me by one of last year’s organisers. I won’t name and shame – but my fate was sealed; I was to become the organiser for the 2024 departmental pantomime. But who, pray, would heed my call for a co-organiser in this, my hour of need? It all came down to a rather conniving bit of deception, whereby I managed to trick my co-coordinator into accepting the key when they may have been expecting a tasty treat. Who says the pantomime is begrudgingly organised? But our destiny was well and truly decided and laid out in front of us. In nine to ten short months, we would be organising a corral of unruly PhD students and support staff to put on the world’s greatest annual university meteorology department pantomime.

Coming back to where we began, at the start of the academic year we had an extremely strong candidate for what we thought was going to be the theme of the panto, but as always, the story for the panto is decided in the second(?) PhD Group Meeting of the year in a democratic process. This is where our original plan got unseated. The strongest proponent (with many supporters) of the originally planned theme made the fatal mistake of prioritising career development over their wishes for a panto theme and could not make the deciding session. As Rabbie Burns reminds us, “The best laid plans of mice and men often go awry” – this was one such instance. With only half the organising committee present to propose the original idea, a plucky upstart with one good joke took the stage and captured the imagination of the PhD cohort, and so it was decided: Stratatouille would be the theme for this year’s panto. As fatefully predicted in last year’s panto blog post…

Plot 

After innumerable lunchtime and evening writing sessions, the bulk of the panto story was baked and ready to consume. It is here where we have to give another massive thank you to Caleb Miller, who spent hours and hours essentially transposing the original story of the film Ratatouille to be based in our department following the terribly cobbled together idea of a story that we had. All we had to do now was pepper it with jokes and puns pertaining to food and/or meteorology and we had a script that even Patton Oswalt would be proud to perform. 

Audience review #1 

“The best panto I’ve seen for many years”

– Dr. Pete Inness 

The story begins with Remi the undergrat realising he feels unable to fulfil his ambition of doing serious research while surrounded by his decidedly unserious fellow undergrats. All they care about is getting drunk off snakebites, but Remi has a dream of becoming a great scientist and doing exceptional and interesting original research. While feeling dejected that he is too inferior to publish original research, he has an apparition of King Sir Professor Brian Hoskins* who gives him a message that anyone can be a scientist if they put their heart into it. 

Then we meet Linguini, a floundering PhD student who feels like he isn’t cut out for the work he is undertaking and expected to do. In a moment of serendipity, Linguini leaves his laptop open and unlocked in the BH coffee area where Remi is able to take a look at the work he is carrying out analysing some CheeseCDF files. Remi realises Linguini’s coding is terrible, fixes a few bugs and manages to greatly improve the code Linguini was working on. This leads to Linguini accepting help from Remi to write a paper as part of his PhD.

In the next scene, Linguini is showing Remi around his PhD office, when the WCD (Weekly Cuisine Discussion) bell goes, and all the PhD students diligently trudge down to GU01 to attend. Admittedly the WCD scene doesn’t further the story much apart from giving Remi an insight into the breadth of research done in this meteorology kitchen. But we got a lot of laughs, good jokes and puns, and silly costumes into this scene so it was an audience and cast favourite. It is later in this scene that we meet the terrifying supervisor, brilliantly played by our regular cartoon villain Catherine Toolan. The supervisor is very tough on Linguini with high expectations and little patience. But that is all too easy for Remi who manages to complete the task the supervisor asked for in no time at all. They (Linguini napping with his feet up) spend the next few hours “cooking up some actual research”. When the supervisor returns, she is amazed to see that ‘Linguini’ has disproved the entire concept of PV. Suspicious that he has managed to attain such a level of skill so quickly, she recommends that he first present the work at a conference before they crack on with publishing the work. 

At the conference, Linguini gives a great presentation (Remi is giving him slide-by-slide instructions) but makes a fatal error by taking nearly all of the credit and failing to mention he got any help from Remi. This alienates Remi, who storms out of the conference to return to the department. Jumping forward in time, Remi returns to the department and hatches a cunning plan to derail it entirely – stealing the tea and coffee money box (topical departmental news has appeared in the script!). Back at the conference, Linguini is making a total fool of himself by not being able to answer even the simplest of questions from the audience, embarrassing his supervisor in the process. She interrogates him about this and finds out, much to her dismay, that an undergrat helped with the research. So disgusted is she at this that the supervisor and the other staff members strike, leaving the department destitute of senior figures.

This leads to a moment where Remi and Linguini make up thanks to an apology, and Remi recruits a team of undergrats to help finish writing the paper they started. The paper is submitted to the journal Nature: Valley Bar where it is eventually inspected by the feared Reviewer 2, who is so impressed by the work that he recommends it be published with no changes (apart from citing one of his own papers). The story ends with KSPBH* re-appearing and handing Remi the keys to the department and naming the building after him. 

Songs 

Please Stop Me Now – there was a running theme of ‘difficult to sing but possibly worth the effort since they are well loved tunes’ for most songs this year, and this one was no exception. A parody of Queen’s 1979 mega-hit Don’t Stop Me Now, our extremely talented band carried our pretty rubbish singing – but that didn’t stop it being some attendees’ favourite part of the show.

Audience review #2 

“How did you guys come up with all those song lyrics and make them work? So funny and so impressive!” 

– MSc Student 

Come on Remi – One of the more singable tunes, based on Come On Eileen by Dexys Midnight Runners, all about how much work Remi was going to have to do to get Linguini through his PhD work. In practising this one, we had choir master Catherine bellowing at us to sing louder, a task we all found much easier after a few glasses of boxed wine from the Winnersh Sainsbury’s.

The Middle – Jimmy Eat World was the third song, which I don’t think we even came up with a spoof title for; a particularly catchy punk-pop tune about the trials and tribulations of poor Linguini the PhD first year, who is letting his stress get in the way of enjoying the start of his PhD.

Money, Money, Money – an ABBA classic we also didn’t need to change the title of, about the rats stealing the money box. We made the bold decision this year to plant many of the songs mid-scene, a directorial choice that I think helped the coherent telling of the story. Special mention to Nathan’s amazing piano playing skills: the rendition of Erik Satie’s Gymnopédie during the John Meth-Coq-au-Vin monologue was only improvised in the final dress rehearsal earlier that day!

500 Lines – a version of the Proclaimers’ singalong classic 500 Miles, about how many lines Remi has to write to get their paper done!

H-O-S-K-I-N-S – I’m not sure how Sir Brian feels about being the subject of the panto, or at least a song, every year, and this one was a little on the nose; but you really couldn’t ask for a better fit for one of the songs of the summer. Chappell Roan’s Hot to Go had exactly the right mood for what we wanted to sing, and I think it made for a great outro, wonderfully delivered in one of the best KSPBH performances we’ve seen in a while by our very own Douglas Mulangwa.

Casting 

It can be a bit like pulling teeth trying to cast the leading roles in the panto, and as one of the few first-year PhDs with the extroversion and stage presence to tackle this, the inimitable Jake Keller somewhat reluctantly agreed to be Remi with a fateful “if I have to” when asked repeatedly. I think he came around to really enjoying it, and the audience were also quite impressed –

Audience review #3 

[To Jake] “You were great!” 

– Regius Professor Keith Shine 

And Andrea Rivosecchi as Linguini – at first he accepted, but then realised he would have to learn even more lines than the main character; so we looked around and found a great doppelganger for the second act. Not sure if any of you noticed, but in the second act Linguini was played by a different Italian man, Riccardo Monfardini! Some veterans of the game came through and gave us some great performances, with Catherine as Supervisor and Hette Houtman as Pete Dinners. Shout out to Hette, one of last year’s organisers, for also helping us with timing and who to contact for various admin duties. The remaining roles had fewer than five lines, but all were delivered hilariously and brilliantly, and you all appear to have agreed.

Audience Review #5 

“Catherine was quite scary as supervisor” 

– Dr Andy Apple Turnover Turner (Catherine’s PhD Supervisor) 

The Night (and Day) of the Panto 

So as many of you agreed, the Act 1 cameo from our antipodean friend Robbie Marks (the star of last year’s panto) was one of the best moments of the panto:

Audience review #4 

“I can’t believe Robby came through and made that for us!” 

– Gabrielle Ching-Johnson (Undergrat #2) 

This is where I would like to make the point that he sent me those videos the morning of the show, and we had to hurriedly stitch together his several renditions of that speech in different locations with the cinematic walk off. Special thanks to Rosie (last year’s co-organiser) for helping with the video editing, and generally for being a great help to us organisers this year by giving advice and keeping us on time (mostly). Robbie was sent a video of the mirthful reaction to his cameo and responded: “F*** yeah, glad I could make an appearance”. The day went much more smoothly than last year: occupying the Madejski lecture theatre from 2pm onwards with no interruptions, we had plenty of time to set up the tech and instruments, as well as squeeze in a final full rehearsal. We set up the ticket booth, and we were ready to go! 150 people filed in, a great attendance for our show. Not to forget a great buffet beforehand to get everyone in the mood for the flagship event in the departmental calendar.

Act 1 and Act 2 managed to run for about the same amount of time, 30 minutes apiece for an hour-long panto, as we had planned – brilliant! The interval acts, however, put paid to that. A mammoth 45-minute session full of controversy and some of the biggest laughs of the night. We saw stroopwafels crowned the winner of the big biscuit bracket; however, this was vetoed by the head of department, who quite rightly pointed out that they are not a biscuit, and so the runner-up, chocolate Hobnobs, was our true champion. Professor Coq-au-Vin was not the only one to take issue with this controversial result. The 3L68 team of Dan Shipley, Jake Bland and Brian Lo made the argument that Bourbons had been wrongfully expelled and would have won this year, and so Dan delivered a hilarious diatribe explaining how they came to decide which Bourbon was best, and therefore the true winner of the biscuit bracket. I don’t remember which one it was in the end (M&S?) – check the video recordings of the night to find out for yourselves.

A pleasant break from the commotion of the biscuit brackets came courtesy of some classical piano performed by Amber Te Winkel, followed by what some might say is the only reason they attend the panto – Mr Mets. A blinder of an episode in which Peter Clark was apologised to (again – and the apology rejected on his behalf by Humphrey) and insinuated to have signed up to OnlyFans with the most innocent of intentions. The theme of the story was John Methven’s takeover as head of department, with him bumbling along and struggling to fulfil the role while eating copious amounts of ‘free’ food (it’s not free if you use department funds to pay for it, Prof. Methven!). Just to clarify that no one thinks John will struggle to fulfil the role, but as HOD I’m afraid you have to expect a fair bit of derision at these sorts of performances!

Following that, another side-splittingly funny act followed with an after party led by DJ Shonk that included a rare slow number – all in aid of blossoming romance on the dancefloor. 

The amazing cast and crew who made STRATATOUILLE happen!

Reflections 

As always, the panto is months of hard work to organise, and things only ever seem to come together in the eleventh hour. But we had a great team, cast, and band that really made it come together beautifully. Acting on stage, playing in a live band, organising a production, generally being a thesp: these aren’t the kinds of things you expect to hear about from the large majority of the PhD students of the world’s leading meteorology department. But it is these experiences, very far outside most of our comfort zones, that build strong and adaptable characters. And I think this experience has probably given us, as organisers and performers alike, more useful skills than we might have realised. This will, however, probably be these director-producers’ debut and final production.

A huge thank you to everyone who attended and contributed to the panto in any way, no matter how small. Your participation is what makes this a great bonding experience for the department, and you are all greatly appreciated! 

One last time, 

Your Panto Organisers 
Elliot and Niamh 

Preparing for the assimilation of future ocean-current measurements

By Laura Risley

Ocean data assimilation (DA) is vital. It is essential for improving forecasts of ocean variables, and because coupled ocean-atmosphere DA schemes are now used operationally, it is also key to numerical weather prediction (NWP).

At present, observations of the ocean currents are not assimilated operationally. This is all set to change, as satellites are being proposed to measure these ocean currents directly. Unfortunately, the operational DA systems are not yet equipped to handle these observations due to some of the assumptions made about the velocities. In my work, we propose the use of alternative velocity variables to prepare for these future ocean current measurements. These reduce the number of assumptions made about the velocities and are expected to improve NWP forecasts.

What is DA? 

DA combines observations and a numerical model to give a best estimate of the state of our system – which we call our analysis. This will lead to a better forecast. To quote my lunchtime seminar ‘Everything is better with DA!’

Our model state usually comes from a prior estimate which we refer to as the background. A key component of data assimilation is that the errors present in both sets of data are taken into consideration. These uncertainties are represented by covariance matrices. 

I am particularly interested in variational data assimilation, which formulates the DA problem into a least squares problem. Within variational data assimilation the analysis is performed with a set of variables that differ from the original model variables, called the control variables. After the analysis is found in this new control space, there is a transformation back to the model space. What is the purpose of this transformation? The control variables are chosen such that they can be assumed approximately uncorrelated, reducing the complexity of the data assimilation problem.
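
For readers who like to see the least-squares problem written down, the standard variational (3D-Var) cost function has the generic form below, where \(\mathbf{x}_b\) is the background, \(\mathbf{y}\) the observations, \(\mathcal{H}\) the observation operator, and \(\mathbf{B}\) and \(\mathbf{R}\) the background- and observation-error covariance matrices. This is the textbook formulation, not a description of NEMOVAR's specific implementation.

J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) \;+\; \tfrac{1}{2}\,\bigl(\mathbf{y}-\mathcal{H}(\mathbf{x})\bigr)^{\mathrm{T}}\mathbf{R}^{-1}\bigl(\mathbf{y}-\mathcal{H}(\mathbf{x})\bigr)

The control-variable transform \(\delta\mathbf{x} = \mathbf{U}\,\delta\mathbf{v}\) is designed so that \(\mathbf{B} \approx \mathbf{U}\mathbf{U}^{\mathrm{T}}\); if the control variables \(\delta\mathbf{v}\) really are uncorrelated, the background term collapses to the simple form \(\tfrac{1}{2}\,\delta\mathbf{v}^{\mathrm{T}}\delta\mathbf{v}\), which is what makes the transformation so attractive.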

Velocity variables in the ocean 

My work is focused on the treatment of the velocities in NEMOVAR, the data assimilation software used with the NEMO ocean model and run operationally at the Met Office and ECMWF. In NEMOVAR the velocities are transformed to their unbalanced components, and these are then used as control variables. However, the unbalanced components of the velocities are highly correlated, contradicting the assumption made about control variables. This would result in suboptimal assimilation of future surface current measurements – therefore we seek alternative velocity control variables.

The alternative velocity control variables we propose for NEMOVAR are unbalanced streamfunction and velocity potential. This would involve transforming the current control variables, the unbalanced velocities, to these alternative variables using Helmholtz Theorem. This splits a velocity field into its nondivergent (streamfunction) and irrotational (velocity potential) parts. These parts have been suggested by Daley (1993) as more suitable control variables than the velocities themselves. 
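
In two dimensions the Helmholtz splitting can be written in the standard way as

u = \frac{\partial \chi}{\partial x} - \frac{\partial \psi}{\partial y}, \qquad v = \frac{\partial \chi}{\partial y} + \frac{\partial \psi}{\partial x}, \qquad \nabla^{2}\psi = \zeta, \qquad \nabla^{2}\chi = \delta,

where \(\psi\) is the streamfunction, \(\chi\) the velocity potential, \(\zeta\) the relative vorticity and \(\delta\) the divergence, so the \(\psi\)-part of the flow is nondivergent and the \(\chi\)-part is irrotational. (This is the generic form of the decomposition, quoted here for orientation rather than taken from the paper in preparation.)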

Numerical Implications of alternative variables 

We have performed the transformation to these proposed control variables using the shallow water equations (SWEs) on a 𝛽-plane. To do so we discretised the variables on the Arakawa-C grid. The traditional placement of streamfunction on this grid causes issues with the boundary conditions. Therefore, Li et al. (2006) proposed placing streamfunction in the centre of the grid, as shown in Figure 1. This circumvents the need to impose explicit boundary conditions on streamfunction. However, using this grid configuration leads to numerical issues when transforming from the unbalanced velocities to unbalanced streamfunction and velocity potential. We have analysed these theoretically and here we show some numerical results.

Figure 1: The left figure shows the traditional Arakawa-C configuration (Lynch (1989), Watterson (2001)) whereby streamfunction is in the corner of each grid cell. The right figure shows the Arakawa-C configuration proposed by Li et al. (2006) where streamfunction is in the centre of the grid cell. The green shaded region represents land. 

Issue 1: The checkerboard effect 

The transformation from the unbalanced velocities to unbalanced streamfunction and velocity potential involves averaging derivatives, due to the location of streamfunction in the grid cell. This process causes a checkerboard effect, whereby numerical noise enters the variable fields due to a loss of information. This is clear to see numerically using the SWEs. We use the shallow water model to generate a velocity field. This is transformed to its unbalanced components and then to unbalanced streamfunction and velocity potential. Using the Helmholtz theorem, the unbalanced velocities are reconstructed. Figure 2 shows the checkerboard effect clearly in the velocity error.

Figure 2: The difference between the original ageostrophic velocity increments, calculated using the SWEs, and the reconstructed ageostrophic velocity increments. These are reconstructed using Helmholtz Theorem, from the ageostrophic streamfunction and velocity potential increments. On the left we have the zonal velocity increment error and on the right the meridional velocity increment error. 
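
The root cause is easy to demonstrate with a toy calculation: a two-point average of the kind needed to move quantities between staggered positions on the grid maps a grid-scale checkerboard exactly to zero, so that component of the field is invisible to the transform and cannot be recovered on reconstruction. The minimal numpy illustration below is not the shallow-water code used in the study, just a sketch of that null space.

# Toy demonstration: a two-point average annihilates a grid-scale checkerboard,
# while a well-resolved pattern passes through almost unchanged.
import numpy as np

n = 8
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
checkerboard = (-1.0) ** (i + j)          # grid-scale (Nyquist) pattern
smooth = np.sin(2 * np.pi * i / n)        # well-resolved pattern

def two_point_average(f):
    """Average neighbouring values in x, as when moving a quantity between
    staggered positions on an Arakawa-C grid."""
    return 0.5 * (f[1:, :] + f[:-1, :])

print(np.abs(two_point_average(checkerboard)).max())  # 0.0 – pattern is annihilated
print(np.abs(two_point_average(smooth)).max())        # O(1) – pattern is retained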

Issue 2: Challenges in satisfying the Helmholtz Theorem 

The Helmholtz theorem splits the velocity into its nondivergent and irrotational components. We discovered that although streamfunction should be nondivergent and velocity potential should be irrotational, this is not the case at the boundaries, as can be seen in Figure 3. This implies the proposed control variables are able to influence each other on the boundary, which would lead to them being strongly coupled and therefore correlated near the boundaries. This directly conflicts with the assumption that our control variables are uncorrelated.

Figure 3: Issues with Helmholtz Theorem near the boundaries. The left shows the divergence of the velocity field generated by streamfunction. The right shows the vorticity of the velocity field generated by velocity potential. 

Overall, in my work we propose the use of alternative velocity control variables in NEMOVAR, namely unbalanced streamfunction and velocity potential. The use of these variables however leads to several numerical issues that we have identified and discussed. A paper on this work is in preparation, where we discuss some of the potential solutions. Our next work will further this investigation to a more complex domain and assess our proposed control variables in assimilation experiments. 

References: 

Daley, R. (1993) Atmospheric Data Analysis. Cambridge University Press.

Li, Z., Chao, Y. and McWilliams, J. C. (2006) Computation of the streamfunction and velocity potential for limited and irregular domains. Monthly Weather Review, 134, 3384–3394.

Lynch, P. (1989) Partitioning the wind in a limited domain. Monthly Weather Review, 117, 1492–1500.

Watterson, I. (2001) Decomposition of global ocean currents using a simple iterative method. Journal of Atmospheric and Oceanic Technology, 18, 691–703.

Nature vs Nurture in Convective-Scale Ensemble Spread

By Adam Gainford

Quantifying the uncertainty of upcoming weather is now a common procedure thanks to the widespread use of ensemble forecasting. Unlike deterministic forecasts, which show only a single realisation of the upcoming weather, ensemble forecasts predict a range of possible scenarios given the current knowledge of the atmospheric state. This approach allows forecasters to estimate the likelihood of upcoming weather events by simply looking at the frequency of event occurrence within all ensemble members. Additionally, by sampling a greater range of events, this approach highlights plausible worst-case scenarios, which is of particular interest for forecasts of extreme weather. Understanding the realistic range of outcomes is crucial for forecasters to provide informed guidance, and helps us avoid the kind of costly and embarrassing mistakes that are commonly associated with the forecast of “The Great Storm of 1987”*.

To have trust that our ensembles are providing an appropriate range of outputs, we need some method of verifying ensemble spread. We do this by calculating the spread-skill relationship, which essentially just compares the difference between member values to the skill of the ensemble as a whole. If the spread-skill relationship is appropriate, spread and skill scores should be comparable when averaged over many forecasts. If the ensemble shows a tendency to produce larger spread scores than skill scores, there is too much spread and not enough confidence in the ensemble given its accuracy: i.e., the ensemble is overspread. Conversely, if spread scores are smaller than skill scores, the ensemble is too confident and is underspread. 
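
In its simplest, domain-averaged form, that comparison amounts to checking that the standard deviation of the members about the ensemble mean matches the root-mean-square error of the ensemble mean, averaged over many forecasts. The sketch below is just that textbook version for illustration (array shapes and names are assumptions); verification of precipitation patterns, discussed later in the post, needs neighbourhood-based metrics instead.

# Minimal sketch of a spread-skill comparison, averaged over many forecast cases.
import numpy as np

def spread_and_skill(ens: np.ndarray, obs: np.ndarray):
    """ens: (n_cases, n_members, n_points); obs: (n_cases, n_points).
    Returns (mean ensemble spread, RMSE of the ensemble mean)."""
    ens_mean = ens.mean(axis=1)
    # Spread: standard deviation of the members about the ensemble mean.
    spread = np.sqrt(((ens - ens_mean[:, None, :]) ** 2).mean())
    # Skill: root-mean-square error of the ensemble mean against observations.
    rmse = np.sqrt(((ens_mean - obs) ** 2).mean())
    return spread, rmse

# If spread is much smaller than rmse the ensemble is underspread;
# if spread is much larger, it is overspread.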

Figure 1: Postage stamp plots showing three-hourly precipitation accumulation valid for 2023-07-08 09Z at leadtime T+15 h. There is reasonable spread within both the frontal rain band affecting areas of SW England and Wales, and the convective features ahead of this front.

My PhD work has focussed on understanding the spread-skill relationship in convective-scale ensembles. Unlike medium-range ensembles, which are used to estimate the uncertainty of synoptic-scale weather at daily-to-weekly leadtimes, convective-scale ensembles quantify the uncertainty of smaller-scale weather at hourly-to-daily leadtimes. To do this, convective-scale ensembles must be run at higher resolutions than medium-range ensembles, with grid spacings smaller than 4 km. These higher resolutions allow the ensemble to explicitly represent convective storms, which has been repeatedly shown to produce more accurate forecasts compared with coarser-resolution forecasts that must instead rely on convective parametrizations. However, running models at such high resolutions is too computationally expensive to be done over the entire Earth, so they are typically nested inside a lower-resolution “parent” ensemble which provides initial and boundary conditions. Despite this, researchers often report that convective-scale ensembles are underspread, and the range of outputs is too narrow given the ensemble skill. This is corroborated by operational forecasters, who report that the ensemble members often stay too close to the unperturbed control member.

To provide the necessary context for understanding the underspread problem, many studies have examined the different sources and behaviours of spread within convective-scale ensembles. In general, spread can be produced through three different mechanisms: firstly, through differences in each member’s initial conditions; secondly, through differences in the lateral boundary conditions provided to each member; and thirdly, through the different internal processes used to evolve the state. This last source is really the combination of many different model-specific factors (e.g., stochastic physics schemes, random parameter schemes etc.), but for our purposes this represents the ways in which the convective-scale ensemble produces its own spread. This contrasts with the other two sources of spread, which are directly linked to the spread of the parent ensemble.  

The evolution of each of these three spread sources is shown in Fig. 2. At the start of a forecast, the ensemble spread is entirely dictated by differences in the initial conditions provided to each ensemble member. As we integrate forward in time, though, this initial information is removed from the domain by the prevailing winds and replaced by information arriving through the boundaries. At the same time, internal model processes start spinning up additional detail within each ensemble member. For a UK-sized domain, it takes roughly 12 hours for the initial information to have fully left the domain, though this is of course highly dependent on the strength of the prevailing winds. After this time, spread in the ensemble is partitioned between internal processes and boundary condition differences.  

Figure 2: Attribution of spread within a convective-scale ensemble by leadtime. 

While the exact partitioning in this schematic shouldn’t be taken too literally, it does highlight the important role that the parent ensemble plays in determining spread in the child ensemble. Most studies which try to improve spread target the child ensemble itself, but this schematic shows that these improvements may have quite a limited impact. After all, if the spread of information arriving from the parent ensemble is not sufficient, this may mask or even overwhelm any improvements introduced to the child ensemble.  

However, there are situations where we might expect internal processes to show a more dominant spread contribution. Forecasts of convective storms, for instance, typically show larger spread than forecasts of other types of weather, and are driven more by local processes than larger-scale, external factors.

This is where our “nature” and “nurture” analogy becomes relevant. Given the similarities of this relationship to the common parent-child theory in behavioural psychology, we thought it would be a fun and useful gimmick to also use this terminology here. So, in the “nature” scenario, each child member shows large similarity to the corresponding parent member, which is due to the dominating influence of genetics (initial and boundary conditions). Conversely, in the “nurture” scenario, spread in the child ensemble is produced more by its response to the environment (internal processes), and as such, we see larger differences between each parent-child pair.  

While the nature and nurture attribution is well understood for most variables, few studies have examined the parent-child relationship for precipitation patterns, which are an important output for guidance production and require the use of neighbourhood-based metrics for robust evaluation. Given that this is already quite a long post, I won’t go into too much detail of our results looking at nature vs nurture for precipitation patterns. Instead, I will give a quick summary of what we found: 

  • Nurture provides a larger than average influence on the spread in two situations: during short leadtimes**, and when forecasting convective events driven by continental plume setups. 
  • In the nurture scenarios, spread is consistently larger in the child ensemble than the parent ensemble. 
  • In contrast to the nurture scenarios, nature provides larger than average spread at medium-to-long leadtimes and under mobile regimes, which is consistent with the boundary arguments mentioned previously. 
  • Spread is very similar between the child and parent ensembles in the nature scenarios.  

If you would like to read more about this work, we will be submitting a draft to QJRMS very soon.  

To conclude, if we want to improve the spread of precipitation patterns in convective-scale ensembles, we should direct more attention to the role of the driving ensemble. It is clear that the exact nesting configuration used has a strong impact on the quality of the spread. This factor is especially important to consider given recent experiments with hectometric-scale ensembles which are themselves nested within convective-scale ensembles. With multiple layers of nesting, the coupling between each ensemble layer is likely to be complex. Our study provides the foundation for investigating these complex interactions in more detail. 

* This storm was actually well forecast by the Met Office. The infamous Michael Fish weather update in which he said there was no hurricane on the way was referring to a different system which indeed did not impact the UK. Nevertheless, this remains a good example of the importance of accurately predicting (and communicating) extreme weather events.  

** While this appears to be inconsistent with Fig. 2, the ensemble we used does not solely take initial conditions from the driving ensemble. Instead, it uses its own high-resolution data assimilation scheme, separate from that of the parent ensemble. Each ensemble is produced in a way that makes the data assimilation more influential to the spread than the initial condition perturbations. 

The 5th ICTP Summer School on Modelling of Climate Dynamics: Convection and Clouds, and Conference on Convective Organisation (WCO4)

By Juan Garcia Valencia

In the tropics, organised convective systems provide the majority of precipitation and are often responsible for extreme events. To understand these systems, researchers now use kilometre-resolution (k-scale) global and regional convection-permitting models, along with the latest satellite observations. Machine learning tools have also emerged as important supplements to our dynamical and thermodynamic understanding.

It’s crucial to understand these tools to address key questions such as:

  • How do deep and shallow convection organise in k-scale models?
  • Can energy budgets help explain their precipitation biases?
  • What are the recent advances in convective parameterisation?

These questions were the focus of the “5th Summer School on Theory, Mechanisms and Hierarchical Modelling of Climate Dynamics: Convection and Clouds,” which I had the privilege of attending from the 1st–19th of July 2024 at the International Centre for Theoretical Physics (ICTP) in Trieste, Italy.

Picture 1 and 2. First lecture and campus. 

The program offered a mix of introductory and advanced lectures, hands-on data analysis through participant projects, and the chance to get involved in the “4th Workshop on Convective Organisation and Precipitation Extremes (WCO4).” The opportunity to attend arose because two of my supervisors, Chris Holloway and Lorenzo Tomassini, presented their work at the conference and taught some of the lectures in the course. As a PhD student researching monsoons using kilometre-scale simulations, I also felt like I had to attend! 

The three-week schedule was intense, with most days running from 9 AM to 6 PM (thankfully with plenty of coffee breaks and a long lunch). A typical day began with lectures from leading experts like Simona Bordoni, Robert Pincus, and Courtney Schumacher. Topics ranged from convection and radiation to RCE, stochastic parameterisation, and observations. Afternoons were usually dedicated to computer lab sessions or group project work.

Picture 3. Attendees of the summer school. 

The second week centred on the WCO4 conference, covering topics from convective self-aggregation in idealized experiments to precipitation extremes associated with organized convection and optimizing our use of observational data. Students had the opportunity to present posters on their research—an incredibly valuable experience for me as I received loads of useful feedback about my ideas and goals. This was also my first time presenting research at an international event, so it was great to show what I’ve been working on in front of all the attendees and meet so many people genuinely interested in my work.

The final week focused on hands-on projects. In groups of 4–5, we analysed numerical model data and presented our results to everyone. My group examined how precipitation extremes change in a warming world using NextGEMS data, but every group had a different topic, chosen according to their interests and expertise. Many of the tasks and analyses we did were similar to my first-year work, so I left with plenty of new ideas for my research!

Picture 4. End of group project presentation and poster presentation. 

Being an international centre, the school and conference brought together staff and students from all corners of the globe—one of my favourite aspects of the course. Despite knowing no one beforehand, I quickly got to know other PhDs and post-docs from various institutions, all working on projects similar to mine. I felt at ease in this new environment, making friends and meeting potential future colleagues!

Another fantastic aspect of this summer school was its stunning location on Italy’s sunny, warm northern Adriatic coast. After each day’s activities, we were free to spend our evenings as we pleased. This was the perfect opportunity to relax by the sea, swim, and explore Trieste’s picturesque town centre. More often than not, we’d venture into town for pizza and, of course, gelato!

Starting Your PhD Journey: Tips for Success

So, you’ve officially embarked on the exciting journey that is a PhD—congrats! You’ve reached a major milestone, and whether you’re feeling excited, overwhelmed, or a mix of both, just know you’ve signed up for an adventure like no other. A PhD is an incredible opportunity to dive headfirst into a subject you’re passionate about, build a toolkit of valuable skills, and—who knows?—maybe even make history in your field.

But let’s be real: it’s not all rainbows and groundbreaking discoveries. The PhD life can be challenging, sometimes feeling like a marathon through an obstacle course. You’ll have moments that test your patience, confidence, and sometimes, your sanity. That’s why here at Social Metwork, we’ve gathered some golden advice from seasoned PhD students to help you navigate these waters. Our goal? To make this transition into PhD life a little smoother, maybe even a little fun.

We’ll break these tips down into three areas: navigating day-to-day life as a PhD student, getting organized like a pro, and growing into the great scholar you’re destined to be. Ready? Let’s dive in!

1. Navigating Day-to-day Life as a PhD Student

Work-life balance

The first year of your PhD can feel overwhelming as you try to juggle research, coursework, and life. One key piece of advice? Don’t overwork yourself. As Laura Risley puts it, “Sometimes if you’re struggling with work, an afternoon off is more useful than staying up late and not taking a break.” It’s easy to get absorbed in your work, but stepping away to recharge can actually help you return with fresh perspectives.

Getting involved in activities outside your PhD is another great way to maintain balance (L. Risley, 2024). Whether it’s exploring more of Reading, participating in a hobby, or just getting outside for some fresh air, your brain will thank you for the break. Remember, “Your PhD is important, but so is your health,” so make sure to take care of yourself and make time for things that bring you joy: exercise, good food, and sleep!

Lastly, don’t underestimate the power of routine. Building a consistent schedule can help bring some stability to PhD life. Most importantly, be kind to yourself. The weight of expectations can be heavy so give yourself permission to not have it all figured out yet. You won’t understand everything right away, and that’s completely normal!

Socialising and Building a Support System

Your cohort is your lifeline. The people you start with are going through the same experiences, and they will be your greatest support system. Whether you’re attending department events, organizing a BBQ, or just grabbing a coffee, socializing with your peers is a great way to get through everything. At the end of the day, we are all in this together! As Rhiannon Biddiscombe wisely says, “Go for coffee with people, go to Sappo, enjoy the pub crawls, waste a night out at PT, take part in the panto, spend time in the department in-person” — so make sure you get involved!

If what you want is to meet new people, you could even help organise social events, like research group socials or casual hangouts – feeling connected within your department can make all the difference when you’re having a tough week. And hey, if you’re looking for a fun group activity, “Market House in town has darts boards, ping pong tables, and shuffleboard (you slide little discs to the end of the board, it’s good fun!)”.

2. Getting Organised Like a Pro

Writing and Coding

Staying organised is critical for both your mental health and your research. Adam Gainford recommends you start by setting up a reference manager early on—trust us, you’ll thank yourself later. And if your research involves coding, learn version control with Git and host your projects on a platform like GitHub to keep them neat and manageable. As a fellow PhD student says, “Keeping organised will help keep your future self sane (and it’s a good skill that will help you with employability and future group projects)”.

A golden rule for writing: write as you go. Don’t wait until the last minute to start putting your thoughts on paper. Whether it’s jotting down a few ideas, outlining a chapter, or even starting a draft, regular writing will save you from stress later on. Remember what Laura always says, “It’s never too early to start writing.”

Time Management

Managing your time as a PhD student is a balancing act. Plans will shift, deadlines will change, and real life will get in the way—it’s all part of the process. Instead of stressing over every slipped deadline, try to “go with the flow”. Your real deadlines are far down the road, and as long as you’re progressing steadily, you’re doing fine.

Being organised also doesn’t have to be complicated. Some find it helpful to create daily, weekly, or even monthly plans. Rhiannon recommends keeping a calendar to track meetings, seminars, and research group sessions – I couldn’t agree more, and I find time-blocking is a great way to make sure everything gets done. Regarding your inbox, make sure you “stay on top of your emails but don’t look at them constantly. Set aside a few minutes a day to look at emails and sort them into folders, but don’t let them interrupt your work too much!”. Most importantly though, don’t forget to schedule breaks—even just five minutes of stepping away can help you reset (and of course, make sure you have some valuable holiday time off!).

3. Growing into the Scholar You’re Meant to Be

Asking for Help

This journey isn’t something you’re expected to do alone. Don’t be afraid to reach out for help from your friends, supervisors, or other PhD students. Asking questions is a sign of strength, not weakness. What’s great is that everyone has different backgrounds, and more often than not, someone will be able to help you navigate whatever you’re facing (trust me, as a geography graduate my office mates saved my life with atmospheric physics!). Whether you’re stuck on a tricky equation or need clarification on a concept, ask ask ask! 

“You’ve got a whole year to milk the ‘I’m a first year’ excuse, but in all seriousness, it’s never too late to ask when you’re unsure!” – a fellow PhD student.

Navigating Supervisor Meetings

Your supervisors are there to guide you, but communication is key. Be honest with them, especially when you’re struggling or need more support. If something doesn’t make sense, speak up—don’t nod along and hope for the best, “they should always have your back” (it will also be very embarrassing if you go along with it and are caught out with questions…). 

Also, “If you know some things you want to get out of your PhD, communicate that with your supervisors”. Open communication will help you build a stronger working relationship and ensure you get what you need from the process.

Dealing with Imposter Syndrome

Imposter syndrome can hit hard during a PhD, especially when you’re surrounded by brilliant people doing impressive work. But here’s the thing: don’t compare yourself to others. Everyone’s PhD is different—some projects lend themselves to quick results, while others take longer. Just because someone publishes early doesn’t mean your research is less valuable or that you’re behind – we are all on our own journeys. 

And remember, no one expects you to know everything right away. “There might be a pressure, knowing that you’ve been ‘handpicked’ for a project, that you should know things already; be able to learn things more quickly than you’re managing; be able to immediately understand what your supervisor is talking about when they bring up XYZ concept that they’ve been working on for 20+ years. In reality, no reasonable person expects you to know everything or even much at all yet. You were hand-picked for the project because of your potential to eventually become an independent researcher in your field – a PhD is simply training you for that, so you need to finish the PhD to finish that training.”

If you’re struggling with imposter syndrome, or want to learn about ways to deal with it, I highly recommend attending the imposter syndrome RRDP.

A Few Final Words of Wisdom

The PhD rollercoaster is full of ups and downs, but remember, you’re doing fine. “If your supervisors are happy, then don’t worry! Everything works out in the end, even when it seems to not be working for a while!” – Laura Risley

It’s also super important to enjoy the process. You’ve chosen a topic you’re passionate about, and this is a rare opportunity to fully immerse yourself in it. Take advantage of that! Don’t shy away from opportunities to share your work. Whether it’s giving a talk, presenting a poster (or writing for the Social Metwork blog!!), practice makes perfect when it comes to communicating your research.

Embarking on a PhD is no small feat, but hopefully with these tips, you’ll have the tools to manage the challenges and enjoy the ride. And if all else fails, remember the most important advice of all: “Vote in the Big Biscuit Bracket—it’s the most important part of being a PhD student!”. 

From the department’s PhD students to you!

Written by Juan Garcia Valencia 

The Mystery of Coarse Dust Transport in Observations and Models

Natalie Ratcliffe – n.ratcliffe@pgr.reading.ac.uk

On Tuesday 23rd April 2024, I presented my PhD work to the department at the lunchtime seminar. The work I presented drew together much of what I have done over the three and a half years of my PhD. This blog post will be a brief overview of the work discussed.

Every year, between 300 and 4000 million tons of mineral dust are lofted from the Earth’s surface (Huneeus et al., 2011; Shao et al., 2011). This dust can travel vast distances, affecting the Earth’s radiative budget, water and carbon cycles, fertilization of land and ocean surfaces, as well as aviation, among other impacts. Observations from recent field campaigns have revealed that we underestimate the amount of coarse particles (>5 um diameter) which are transported long distances (Ryder et al., 2019). Based on our understanding of gravitational settling, some of these particles should not physically be able to travel as far as they do. This results in an underestimation of these particles in climate models, as well as a bias towards modelling finer particles (Kok et al., 2023). Furthermore, fine particles have different impacts on the Earth than coarse particles, for example on the radiative budget at the top of the atmosphere: including more coarse particles in a model reduces the cooling effect that dust has on the Earth.
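
To get a feel for why this is so puzzling, here is a rough back-of-the-envelope calculation of my own (an illustration only, not part of the published analysis). It uses Stokes’ law for the terminal velocity of a small sphere, with assumed typical values for dust density, air viscosity, a ~5 km dust layer and a ~10 m/s transport wind:

    # Rough Stokes terminal velocity for spherical dust particles.
    # Assumed illustrative values (not from the study): particle density
    # 2650 kg/m3, air density 1.2 kg/m3, dynamic viscosity 1.8e-5 Pa s.
    rho_p, rho_a, mu, g = 2650.0, 1.2, 1.8e-5, 9.81

    def settling_velocity(d_um):
        """Stokes terminal velocity (m/s) for a sphere of diameter d_um micrometres."""
        d = d_um * 1e-6
        return (rho_p - rho_a) * g * d ** 2 / (18.0 * mu)

    for d_um in (2, 5, 20, 63):
        v = settling_velocity(d_um)
        t_fall = 5000.0 / v                 # seconds to fall from ~5 km with no updraft
        distance = 10.0 * t_fall / 1000.0   # km covered by a ~10 m/s transport wind
        print(f"{d_um:>3} um: v_t = {v * 100:6.2f} cm/s, "
              f"falls 5 km in {t_fall / 86400:6.2f} days, travels ~{distance:8.0f} km")

Under these simplified assumptions a 63 um particle should fall out of a 5 km layer within a couple of hundred kilometres, yet, as we will see below, such particles are still being observed all the way over at the Caribbean.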

Thus, my PhD project was born! We wanted to try and peel back the layers of the dusty onion. How are these coarse particles travelling so far?

Comparing a Climate Model and Observations

First, we compared in-situ aircraft observations to a climate model simulation to assess the degree to which the model struggles to represent coarse particle transport from the Sahara across the Atlantic to the Caribbean. Measuring particles up to 300 um in diameter, the Fennec, AER-D and SALTRACE campaigns provide observations at three stages of transport through the lifetime of dust in the atmosphere (near emission, moving over the ocean, and at distance from the Sahara; Figure 1). Using these observations, we assess a Met Office Unified Model HadGEM3 configuration. This model has six dust size bins, ranging from 0.063-63.2 um diameter—a much larger upper bound than most climate models, which tend to cut off at 10-20 um.

Figure 1: Map showing the location of the flight tracks which were taken when the observations were measured.
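
To make the comparison concrete, here is a small sketch (my own illustration) of how an observed mass size distribution can be integrated onto the model’s six diameter bins. The bin edges below are assumed, log-spaced values consistent with the 0.063-63.2 um range and with the 20-63.2 um coarsest bin mentioned later; the exact edges in the model configuration may differ.

    import numpy as np

    # Assumed, illustrative bin edges (um): six log-spaced bins spanning
    # 0.063-63.2 um. The real model configuration may use different edges.
    EDGES_UM = np.array([0.0632, 0.2, 0.632, 2.0, 6.32, 20.0, 63.2])

    def bin_size_distribution(d_um, dm_dlogd):
        """Integrate a mass size distribution dM/dlogD, sampled at diameters
        d_um, onto the six model diameter bins (trapezoidal rule in logD)."""
        logd = np.log10(d_um)
        binned = np.zeros(len(EDGES_UM) - 1)
        for i, (lo, hi) in enumerate(zip(EDGES_UM[:-1], EDGES_UM[1:])):
            inside = (d_um >= lo) & (d_um < hi)
            if inside.sum() > 1:
                binned[i] = np.trapz(dm_dlogd[inside], logd[inside])
        return binned

    # Toy observed distribution: a single lognormal-ish mode peaking near 6 um.
    d = np.logspace(-1.2, 2.5, 300)                                # 0.06-300 um
    dm_dlogd = np.exp(-0.5 * ((np.log10(d) - 0.8) / 0.5) ** 2)
    mass_per_bin = bin_size_distribution(d, dm_dlogd)
    print(mass_per_bin / mass_per_bin.sum())   # fraction of observed mass per bin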

We found that the model significantly underestimates the total mass of mineral dust in the atmosphere, as well as the fraction of dust mass made up of coarse particles. This happens at all locations, including at the Sahara. Firstly, this suggests that the model is not emitting enough coarse particles to begin with; secondly, the growing underestimation with distance suggests that the coarse particles are being deposited too quickly. Looking further into the model, we found that the coarsest particles (20-63.2 um) were lost from the atmosphere very quickly, barely making it past Cape Verde in their westwards transport, whereas in the observations these coarsest particles were still present at the Caribbean, representing ~20% of the total dust mass. We also found that the modelled coarse particle distribution had a stronger dependence on altitude than the observations, with the model holding fewer coarse particles at higher altitudes. This work has been written up into a paper which is currently under review and is available as a preprint (Ratcliffe et al., 2024).
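
To see why a growing underestimation with distance points at deposition rather than just emission: if dust mass decays roughly exponentially in transit, an effective removal lifetime can be backed out from the mass remaining at each end of the journey. The numbers below are toy values purely for illustration, not results from the study.

    import math

    def effective_lifetime(mass_near, mass_far, transit_days):
        """Effective e-folding removal lifetime implied by exponential decay
        between an upstream and a downstream site."""
        return transit_days / math.log(mass_near / mass_far)

    transit_days = 6.0   # illustrative Sahara-to-Caribbean transit time
    # Toy example: suppose observations keep half the coarse mass, the model keeps 5%.
    print(f"obs-like lifetime:   {effective_lifetime(1.0, 0.50, transit_days):4.1f} days")
    print(f"model-like lifetime: {effective_lifetime(1.0, 0.05, transit_days):4.1f} days")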

Sensitivity Testing of the Model

Having confirmed that the model struggles to retain coarse particles during long-range transport, we wanted to work out whether any of the model processes involved in transport and deposition could be over- or under-active. This involved turning off individual processes one at a time and seeing what impact each had on the dust transport. As we wanted to focus on the impact on coarse particle transport, we needed to start with an improved emission distribution at the Sahara, so we tuned the model to better match the observations from the Fennec campaign.
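
In code, the one-at-a-time design looks something like the sketch below. The run_experiment and coarse_mass_fraction functions are hypothetical placeholders (the real experiments are configured through the model’s own settings, not Python); the sketch only illustrates the logic of switching off or scaling a single process per run and comparing each run against a control.

    def run_experiment(base_config, **overrides):
        """Hypothetical stand-in: merge the per-experiment switches into the
        control configuration (a real version would submit a model run)."""
        cfg = dict(base_config)
        cfg.update(overrides)
        return cfg

    def coarse_mass_fraction(run):
        """Hypothetical diagnostic: a real version would integrate the coarsest
        dust bins of the model output over a downstream region."""
        return float("nan")   # placeholder only

    BASE = {"sedimentation_scaling": 1.0, "turbulent_mixing": True,
            "convective_mixing": True, "wet_deposition": True}

    EXPERIMENTS = {
        "control":              {},
        "no_sedimentation":     {"sedimentation_scaling": 0.0},
        "half_sedimentation":   {"sedimentation_scaling": 0.5},
        "no_turbulent_mixing":  {"turbulent_mixing": False},
        "no_convective_mixing": {"convective_mixing": False},
        "double_convection":    {"convective_mixing_scaling": 2.0},
        "no_wet_deposition":    {"wet_deposition": False},
    }

    for name, overrides in EXPERIMENTS.items():
        run = run_experiment(BASE, **overrides)
        print(f"{name:22s} -> coarse mass fraction: {coarse_mass_fraction(run)}")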

In our first tests we decided to ‘turn off’ or reduce gravitational settling of dust particles in the model, to see what happens if we eliminate the greatest removal mechanism for coarse particles. Figure 2 shows the volume size distribution of these gravitational settling experiments against the observations. Completely removing gravitational settling increased the mass of coarse particles too much, while having little to no effect on the fine particles. To bring the model into better agreement with the observations, sedimentation needs to be reduced by ~50% at the Sahara and by more than 80% at the Caribbean.

Figure 2: Mean volume size distribution between 2500-3000 m in the Fennec (red), AER-D (orange) and SALTRACE (yellow) observations, the control model simulation (black) and the reduced dust sedimentation experiments (blue shades).

We also tested the sensitivity of coarse dust transport to turbulent mixing, convective mixing and wet deposition; however, these experiments did not have as great an impact on coarse transport as the sedimentation. Removing the mixing mechanisms resulted in decreased vertical transport of dust, which tended to reduce the horizontal transport. We also carried out an experiment in which we doubled the convective mixing, and this did show improved vertical and horizontal transport. Finally, when we removed wet deposition of dust, we found that it had a greater impact on the fine particles than on the coarse particles, suggesting that wet deposition is the main removal mechanism for the four finest size bins in the model.

Final Experiment

Now that we know our coarse particles are settling out too quickly and sit a bit too low in the atmosphere, we come to our final set of experiments. Suppose the coarse particles in the model and our dust scheme are actually set up perfectly: could it instead be the model meteorology that is wrong? If the coarse particles were mixed higher up at the Sahara, would they reach faster horizontal winds and travel further across the Atlantic? To test this theory, I modified the files the model uses to start a simulation and redistributed the dust over the Sahara through the depth of the dusty layer (up to ~5 km). This increased the lifetime of the coarsest particles, so that it took twice as long to lose 50% of the starting mass, but it unfortunately only slightly improved the transport distance, as the particles were still lost relatively quickly. After checking the vertical winds in the model, we found that they were an order of magnitude smaller at the Sahara, the Canaries and Cape Verde than those observed during the field campaigns. This suggests that if the vertical winds were stronger, they could lift the dust higher initially and keep the coarse particles aloft for longer, extending their atmospheric lifetime.
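
The redistribution itself is conceptually simple. Here is a minimal numpy sketch of the idea (the real change was made to the model’s start files, not with Python): take a dust column concentrated near the surface and spread it evenly through the lowest ~5 km, conserving the column total.

    import numpy as np

    # Toy column: dust mass mixing ratio concentrated near the surface.
    z = np.arange(250.0, 10000.0, 500.0)     # layer mid-heights (m)
    dz = np.full_like(z, 500.0)              # layer thicknesses (m)
    dust = np.where(z < 1500.0, 1.0, 0.0)    # arbitrary units

    def loft_uniformly(dust, z, dz, top=5000.0):
        """Spread the column-integrated dust evenly through every layer below
        'top', conserving the column total -- a toy version of redistributing
        dust through the depth of the Saharan dusty layer (~5 km)."""
        column_total = np.sum(dust * dz)
        below = z < top
        lofted = np.zeros_like(dust)
        lofted[below] = column_total / np.sum(dz[below])
        return lofted

    lofted = loft_uniformly(dust, z, dz)
    assert np.isclose(np.sum(dust * dz), np.sum(lofted * dz))   # column mass conserved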

Summarised Conclusions

To summarise what I’ve found during my PhD:

  1. The model underestimates coarse mass at emission and the underestimation is exacerbated with westwards transport.
  2. Altering the settling velocity of dust in the model brings the model into better agreement with the observations.
    a. Turbulent mixing, convective mixing and wet deposition have minimal impact on coarse transport.
  3. Lofting the coarse particles higher initially improves transport minimally.
    a. Vertical winds in the model are an order of magnitude too small.

So what’s next?

If we’ve found that the coarse particles are settling out of the atmosphere too quickly (by potentially more than 80%), does that suggest that the deposition equations are wrong and are overestimating particle deposition? So we just change those and everything’s fixed, right? I wish. Unfortunately, the deposition equations are one of the things we are more scientifically sure of, so our results mean that something is happening to the coarse particles that we aren’t modelling, something able to counteract their settling velocity by a very significant amount. Our finding that the vertical winds are too small could be part of this. Other recent research suggests that processes such as particle asphericity, triboelectrification, vertical mixing and turbulent mixing (the last of which has been shown to help in a higher-resolution, non-climate model) could enhance coarse particle transport.
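
As a rough illustration of how much a sustained updraft could matter, the toy comparison below uses the Stokes-style settling speeds from the earlier sketch and two illustrative updraft values (these are not measured winds from the campaigns).

    # Toy comparison of coarse-particle settling against a sustained updraft.
    # Settling speeds are the rough Stokes estimates from the earlier sketch;
    # the updraft values are illustrative only, not campaign measurements.
    settling_cm_s = {20: 3.2, 40: 12.8, 63: 31.8}   # approx. v_t (cm/s) by diameter (um)

    for updraft in (0.5, 3.0):                      # cm/s, illustrative
        for d_um, v_t in settling_cm_s.items():
            net = v_t - updraft                     # net descent rate (cm/s)
            days = 500000.0 / net / 86400.0 if net > 0 else float("inf")
            print(f"w = {updraft:3.1f} cm/s, d = {d_um:2d} um: "
                  f"net descent {net:5.1f} cm/s, 5 km fall time {days:5.1f} days")

Even a modest sustained updraft, if the model could sustain it, would dramatically slow the net descent of the 20 um particles while barely touching the coarsest ones.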

Huneeus, N., Schulz, M., Balkanski, Y., Griesfeller, J., Prospero, J., Kinne, S., Bauer, S., Boucher, O., Chin, M., Dentener, F., Diehl, T., Easter, R., Fillmore, D., Ghan, S., Ginoux, P., Grini, A., Horowitz, L., Koch, D., Krol, M. C., Landing, W., Liu, X., Mahowald, N., Miller, R., Morcrette, J.-J., Myhre, G., Penner, J., Perlwitz, J., Stier, P., Takemura, T., and Zender, C. S. 2011. Global dust model intercomparison in AeroCom phase I. Atmospheric Chemistry and Physics. 11(15), pp. 7781-7816

Kok, J. F., Storelvmo, T., Karydis, V. A., Adebiyi, A. A., Mahowald, N. M., Evan, A. T., He, C., and Leung, D. M. 2023. Mineral dust aerosol impacts on global climate and climate change. Nature Reviews Earth & Environment. pp. 1–16. https://www.nature.com/articles/s43017-022-00379-5

Ratcliffe, N. G., Ryder, C. L., Bellouin, N., Woodward, S., Jones, A., Johnson, B., Weinzierl, B., Wieland, L.-M., and Gasteiger, J. 2024. Long range transport of coarse mineral dust: an evaluation of the Met Office Unified Model against aircraft observations. EGUsphere [preprint]. https://doi.org/10.5194/egusphere-2024-806

Ryder, C. L., Highwood, E. J., Walser, A., Seibert, P., Philipp, A., and Weinzierl, B. 2019. Coarse and giant particles are ubiquitous in Saharan dust export regions and are radiatively significant over the Sahara. Atmospheric Chemistry and Physics. 19(24), pp. 15353–15376

Shao, Y., Wyrwoll, K.-H., Chappell, A., Huang, J., Lin, Z., McTainsh, G. H., Mikami, M., Tanaka, T. Y., Wang, X., and Yoon, S. 2011. Dust cycle: An emerging core theme in Earth system science. Aeolian Research. 2(4), pp. 181–204