Tiger Teams: Using Machine Learning to Improve Urban Heat Wave Predictions

Adam Gainford – a.gainford@pgr.reading.ac.uk

Brian Lo – brian.lo@pgr.reading.ac.uk

Flynn Ames – f.ames@pgr.reading.ac.uk

Hannah Croad – h.croad@pgr.reading.ac.uk  

Ieuan Higgs  – i.higgs@pgr.reading.ac.uk

What is Tiger Teams?  

You may have heard the term Tiger Teams mentioned around the department by some PhD students, in a SCENARIO DTP weekly update email or even in the department’s pantomime. But what exactly is a tiger team? It is believed the term was coined in a 1964 Aerospace Reliability and Maintainability Conference paper to describe “a team of undomesticated and uninhibited technical specialists, selected for their experience, energy, and imagination, and assigned to track down relentlessly every possible source of failure in a spacecraft subsystem or simulation”.  

This sounds like a perfect team activity for a group of PhD students, although our project had less to do with hunting for flaws in spacecraft subsystems or simulations. Translating the original definition of a tiger team into the SCENARIO DTP activity, “Tiger Teams” is an opportunity for teams of PhD students to apply our skills to real-world challenges supplied by industrial partners.   

The project culminated in a visit to the Met Office to present our work.

Why did we sign up to Tiger Teams?  

In addition to a convincing pitch by our SCENARIO director, we thought that collaborating on a project in an unfamiliar area would be a great way to learn new skills from each other. The cross-pollination of ideas and methods would not just be beneficial for our project; it might even help us with our individual PhD work.

More generally, Tiger Teams was an opportunity to do something slightly different connected to research. Brainstorming ideas together for a specific real-life problem, maintaining a code repository as a group and giving team presentations are not the average experiences of a PhD student. Even when, by chance, we get to collaborate with others, is it ever that different from our PhD? The sight of the same problems, in the same area of work, every day, for months on end, can certainly get tiring. Dedicating one day per week to an unrelated, short-term project which will be completed within a few months helps to break the monotony of the mid-stage PhD blues. It is also much more indicative of how research is conducted in industry, where problems are solved collaboratively, and researchers with different talents are involved in multiple projects at once.

What did we do in this round’s Tiger Teams?  

One project was offered for this round of Tiger Teams: “Crowdsourced Data for Machine Learning Prediction of Urban Heat Wave Temperatures”. The bones of this project emerged during a machine learning hackathon at the Met Office and were later turned into a Tiger Teams proposal. Essentially, the project aimed to develop a machine learning model which would use amateur observations from the Met Office’s Weather Observations Website (WOW), combined with landcover data, to fine-tune model outputs onto higher-resolution grids.

Having various backgrounds from environmental science, meteorology, physics and computer science, we were well equipped to carry out tasks formulated to predict urban heat wave temperatures. Some of the main components included:  

  • Quality control of data – as well as being more spatially dense, amateur observation stations are also less reliable than official sites  
  • Feature selection – which inputs should we use to train our ML models?  
  • Error estimation and visualisation – how do we best assess and visualise model performance?  
  • Spatial predictions – developing the tools to turn numerical weather prediction model outputs and high-resolution landcover data into spatial temperature maps.  

Our supervisor for the project, Lewis Blunn, also provided many of the core ingredients to get this project to work, from retrieving and processing NWP data for our models, to developing a novel method for quantifying upstream land cover to be included in our machine learning models. 
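To make the overall approach more concrete, below is a minimal, hypothetical sketch of the kind of pipeline involved: a regression model trained on WOW-style observations matched with NWP output and land-cover descriptors, then applied to a high-resolution grid. The file names, column names and choice of model are illustrative assumptions, not the team’s actual code.

```python
# Illustrative sketch only (not the Tiger Teams code): learn corrections to NWP
# temperatures from crowd-sourced (WOW-style) observations plus land-cover
# features, then predict on a high-resolution grid.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Each row: one station observation matched with the nearest NWP grid point and
# static land-cover descriptors (file and column names are hypothetical).
train = pd.read_csv("wow_training_data.csv")
features = ["nwp_t2m", "building_fraction", "vegetation_fraction",
            "water_fraction", "elevation", "hour_of_day"]
target = "observed_t2m"

X_train, X_val, y_train, y_val = train_test_split(
    train[features], train[target], test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("Validation MAE (degC):", mean_absolute_error(y_val, model.predict(X_val)))

# Apply the trained model to a high-resolution grid (one row per grid cell,
# same features) to produce a spatial temperature map.
grid = pd.read_csv("london_highres_grid.csv")
grid["predicted_t2m"] = model.predict(grid[features])
```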

An example of the spatial maps which our ML models can generate. Some key features of London are clearly visible, including the Thames and both Heathrow runways.

What were the deliverables?  

For most projects in industry, the team agrees with the customer (the industrial partner) on the end-products to be delivered before the conclusion of the project. Our two main deliverables were (i) to develop machine learning models that would predict urban heatwave temperatures across London, and (ii) to present our findings at the Met Office headquarters.

By the end of the project, we had achieved both deliverables. Not only was our seminar at the Met Office attended by more than 120 staff, but we also exchanged ideas with scientists from the Informatics Lab and took a brief tour of the Met Office HQ and its operational centre. The models we developed as a team are in a shared Git repository, although we admit that we could still add a little more documentation for future development.

As a bonus deliverable, we and our supervisor are consolidating our findings into a publishable paper. This is certainly a good return on our team’s effort over the past few months. Stay tuned for results from our paper, perhaps in a future blog post!

Panto 2022: FORTRANGLED 

Caleb Miller – c.s.miller@pgr.reading.ac.uk

Jen Stout – j.r.stout@pgr.reading.ac.uk

One of the biggest traditions in the Reading meteorology department is the yearly Christmas pantomime. Because of lockdowns and COVID-19 safety measures over the past several years, 2022 was the first return to a live performance since 2019, and it was a lot of fun!

FORTRANGLED Poster 

The panto this year was directed by Jen Stout and Caleb Miller. We were asked at the end of the summer by last year’s organizers if we would be interested in taking on the director roles, and we both agreed – most likely only because we didn’t realize how big a task this would be!

Plot 

The original idea for the plot came from Jen. They suggested that we base it on the story of Rapunzel, particularly Disney’s adaptation in the movie Tangled. This turned out to be a well-loved story for many of the PhD students in the department, and when we met early in the autumn term to vote on a plot idea, Tangled won unanimously.

It wasn’t long before we began to adapt the story to our own department and the field of meteorology. The original movie centers on Rapunzel, a princess kidnapped at birth because of her hair’s special abilities, who escapes her remote tower with the help of an outlaw. We quickly recognized the similarity between the original story and our own department’s move from the old Lyle building to the main Brian Hoskins building, which was taking place at the time.

The Lyle building was famously tall (with many, many stairs), and it was isolated from the majority of the department, much like Rapunzel’s tower. We even had someone to rescue the poor Lyle residents: head of department, Joy Singarayer! 

But who would take the spot of the villain, the woman who owned the tower and held Rapunzel there? Why not the Remote Sensing, Clouds, and Precipitation (“Radar”) research group? Jen and Caleb were both members (as were two of our supervisors), so we could make fun of ourselves, and the group had many members who were still in the Lyle building at the time. 

Soon, the story began to develop. Caleb wrote much of the initial draft and dialog, and several of the seasoned panto writers from last year stepped in and peppered the script with jokes and radar-related puns, much improving the final story! 

In the end, FORTRANGLED told the story of a young PhD student, Rapunzel, who wanted to use her invention, the Handheld Advanced Imaging Radar (“HAIR”), for in situ measurements on a weather balloon, but she is stopped by the Radar group. Thankfully, she is rescued from the lonely Lyle tower by Joy Singarayer, and finally she joins her original supervisor King Professor Sir Brian Hoskins and launches the balloon. 

Songs 

Of course, the panto wouldn’t be the same unless it featured popular songs with brand-new lyrics full of meteorology puns! We decided to use several of the songs from the Tangled film, while adding a few others where they fit.  

The band was headed by Flynn Ames. They began rehearsing months in advance, and their practice paid off enormously. The band performed excellent covers of a wide variety of musical genres and songs, featuring acoustic and electric guitars, bass, drums, keys, cello, and even a trombone! By the time we came to rehearse with the singers, they sounded incredible.  

As for the lyrics to the songs, Jen took charge of most of the writing. Once it was realized that “Sheet Nimbostratus” sounded vaguely like “Pina Coladas,” the favourite “Sheet Nimbostratus (Escape)” song (a parody of Rupert Holmes’ Pina Colada Song) was written in under half an hour on a lunch break, and it ended up working well with the theme of escape in the first act.

Flynn was also a massive help with the songs, especially the last song of the show: Hall & Oates’ “You Make My Dreams”. When it came to rehearsing the songs with the cast as singers, it was excellent having Flynn, someone who wasn’t rhythmically challenged, to help us sort out when to sing the lyrics (as well as what words to sing and what notes to sing them to!), so thank you to Flynn, Beth, and the rest of the band for helping the rest of us sing as best as we could!

Casting the lead 

Once the script was written, it was time to select the cast. Most of the casting was reasonably quick, but we had one issue: no one wanted to play the lead! Convincing a PhD student to pretend to be a princess in front of the entire department is understandably difficult. We spent at least a week going around the department trying to convince one another to step up for the role.  

However, the role had far too many lines for any one person to commit to, and therefore we settled on the “Rapunzel Roulette,” where a different person would play Rapunzel in each of the scenes. This ended up being a really good move, and meant that instead of the role being high-pressure, it was a rush of excitement and silliness for each act, especially as they had to pass the wig onto the next person before the next act started. 

The Night of the Panto 

The panto turned out to be a lot of fun! We sold over 130 tickets, and this was certainly one of the larger post-lockdown events at the department. Planning for the in-person event required no small amount of admin work, and we were especially helped by Dana Allen, Joy Singarayer, and Andrew Charlton-Perez! 

The event started at 6:30 with a bring-and-share buffet, and the doors to the Madejski Lecture Theatre opened at 7:30 before the show started.

The FORTRANGLED cast 

We also had several interval acts, including the latest episode from John Shonk’s famous Mr. Mets series, Blair McGinness’s presentation on the controversial results of a department biscuit ranking tournament, and a musical performance from the faculty! 

After the interval acts, we resumed with the second act of the panto and finished off the result of months of writing and rehearsals. The inclusion of the “Top Secret” notes and the distribution of balloons was a last-minute addition, organized by Jen and intended to surprise the rest of the cast; only our excellent Narrator, Natalie, was told beforehand in case everything went wrong!

The instructions to the audience were as follows: “In the wilderness: If you see a duck; shout: quack! If you see a goose, shout: honk!,” as well as the extremely vague: “If the stage needs a balloon: please blow up your balloon and throw it towards stage!” 

Surprisingly, especially for a pantomime, the audience was incredibly well-behaved, and the balloons arrived exactly on cue (despite this not being written into the script whatsoever). As for the command to shout “honk” and “quack” when geese and ducks appeared… the honks went on for much longer than we expected, causing a lot of chaos and confusion both on and off stage! This was undoubtedly Jen’s favourite part of the entire show.

After the show, we celebrated with an afterparty in the department coffee area led by DJ Shonk. This included some thematically appropriate piña coladas, which may have led to the scattering of the geese and ducks throughout the department…

Reflections 

The panto was the first in-person panto since 2019’s The Sonde of Music, so most of the cast hadn’t seen a live pantomime done the way we did it this year! This made it a massive challenge to organise, and directing the panto turned out to be a very difficult, but also very rewarding, task. Seeing everyone’s hard work come together on the night was the best part, and we’re glad we contributed to such a long-standing department tradition.

We’d like to thank everyone who was involved: anyone we convinced to act, sing, play in the band, make props, put on silly outfits, organise the event, perform an interval act, or throw balloons at the stage. We found that this department is full of some very talented people, and it was really fun getting to work in some areas we don’t often get to see. If meteorology research is one day taken over by AI, the members in our department would have no problem finding new jobs on Broadway! 

AGU 2022 in the Windy City

Lauren James – l.a.james@pgr.reading.ac.uk

AGU Fall Meeting 2022 was held in Chicago, Illinois from 12th – 16th December, and I was fortunate to attend the conference in person to present a poster on my PhD research. At this post-pandemic event, 18,000 attendees were expected to be present throughout the week, with more attending online. To date, this was the largest audience ever to view my research.

No matter how many people tell you how huge the AGU meeting is, it is not until you walk into the venue that you understand the extent of this conference. Rows upon rows of poster boards, a sizeable exhibition hall, an AGU centre and relaxation zone, and endless hallways to the seminar rooms. I went by the venue on Sunday afternoon to register and work out the main routes to and around the conference centre. I would recommend this to anyone attending, as come Monday morning the registration queue was extraordinarily long, looping across bridges and down staircases – long enough to make you miss any early-morning talks you wanted to attend.

Tuesday morning was my allocated time to showcase my work. The poster sessions were 3.5 hours long, but the posters could be kept up on the board for the full day. Whilst there was no requirement to stand next to your poster for the full duration, I did just that, as time flew by very quickly. Fellow scientists were eager to discuss the work, learn about new ideas, and find overlaps with their own research. I brought along A4 printed versions of the poster (an idea I had picked up from another conference), which made it easy to let attendees take away a copy for reference or to allow people at the back of a crowd to read the poster. For the online attendees, presenters could make an interactive poster (a.k.a. iPoster) which was published in the online gallery. This platform allowed videos, gifs, and audio clips, as well as unlimited text in expandable text boxes. Whilst still being mindful of not overcrowding a poster, these additional features made the poster more accessible. For some fortunate presenters, digital poster rows at the conference allowed their iPosters to be viewed in person too. Presenters could use the videos and audio to support their work, and attendees could easily interact with the displays even when unattended. Further, there was no need to organise printing or to travel with a poster, and no waste was produced. Could this be the future of poster sessions?

Figure 1: An overlooking view of a section of the poster hall on the final day of the conference. The digital poster row can be seen on the closest row.
Figure 2: A picture of myself in front of my poster.

There were oral presentations throughout the week to suit every field of research. With the help of the AGU app, I was able to make a schedule of the space physics sessions I wanted to attend and optimise my time at the conference by finding other sessions I would find interesting. This year, for the first time, there was a session on ‘Raising Awareness on Mental Health in the Earth and Space Sciences’. In the last few years, such sessions have become more widely available, and I am happy to see that AGU has also taken the opportunity to discuss the importance of healthy work. Of the oral presentation sessions I attended, this one drew a particularly engaged audience and highlighted the importance of interdisciplinary discussions. All the oral presentation sessions catered for in-person and online audiences, allowing online speakers to present and participate in the Q&A. The sessions also remain available for all attendees to re-watch for a few weeks.

A walk around the exhibition hall filled some of the free time between sessions and allowed attendees to discuss careers with academic institutes and businesses working with instrumentation, programming, data accessibility, fieldwork and more. As a postgraduate student in space physics, it was initially overwhelming to see many stands advertising topics alien to me. But before I knew it, I had heard about a new state-of-the-art instrument that will rapidly transmit terabytes of data; learnt about ground aquifers by making an Oreo ice cream float; and collected a renowned NASA 2023 calendar.

From my understanding, there have been a few changes to the AGU meeting since pre-pandemic times. The colour of your lanyard corresponded to your comfort level with COVID-19 safety, spanning from ‘I need distance’ to ‘Air high fives approved’. Alcoholic refreshments during poster sessions were not provided – a conscious decision to improve attendee well-being and ensure the code of conduct was upheld. And the host city of the meeting will now change annually within the US to improve the accessibility of the conference (although for a UK attendee, a long-haul flight is unavoidable regardless).

Figure 3: A few memories from my visit to Chicago, including the Oreo ice cream float, the Cloud Gate (a.k.a. The Bean), and the NHL ice hockey game at the United Center.

Chicago was a lovely host city this year. The conference centre was easily accessible by train and bus from the downtown area, and even walkable on a good-weather day. We were fortunate to have rather pleasant weather throughout the visit, although some rain, snow, and a bitterly cold wind were experienced. Exploring the city was extra special this close to Christmas, and the city lights were glorious after sunset. In the evenings, there was ample time for things to do with early-career scientists I’ve met throughout my time as a PhD student and with newly made contacts from the US. Watching an NHL ice hockey match, visiting Navy Pier, a competitive evening at the bowling alley, and trying the famous deep-dish pizza were just some of the things squeezed into the busy week.

There is no doubt that attending the AGU Fall Meeting has been a highlight in my PhD experience, and one that I would recommend to anyone who has the opportunity to visit in the future. Even if you’re travelling alone, which I did, there were ample opportunities to meet fellow attendees and experience a very enjoyable week in the city. I thank the University of Reading Graduate School for giving me a student travel bursary to help fund this international trip. Next year, this conference is being held in San Francisco, California from the 11th – 15th of December 2023.

Urban observations in Berlin

Martina Frid – m.a.h.frid@pgr.reading.ac.uk

Beth Saunders – bethany.saunders@pgr.reading.ac.uk

Introduction 

With a large (and growing) proportion of the global population living in cities, research undertaken in urban areas is important, especially in hazardous situations (heatwaves, flooding, etc.), which are becoming more severe and frequent due to climate change.

This post gives an overview of recent work done for the urbisphere, a Synergy Project funded by the European Research Council (urbisphere 2021) which aims to forecast feedbacks between weather, climate and cities.

Berlin Field Campaign 

The project has included a year-long field campaign (Autumn 2021 – Autumn 2022) undertaken in Berlin (Fig. 1). A smart Urban Observation System was used to take measurements across the city. Sensors used include ceilometers, Doppler wind LIDARs, radiometers, thermal cameras, and large aperture scintillometers (LAS). These measurements were taken to provide new information about the impact of Berlin (and other cities) on the urban boundary layer. The unique observation network was able to provide dense, multi-scale measurements, which will be used to evaluate and inform weather and climate models.  

Figure 1: Locations of the urbisphere sensors in Berlin, Germany (urbisphere 2021).

Large Aperture Scintillometry in Berlin

The Berlin field campaign has included 6 LAS paths (Fig. 1). LAS paths consist of a transmitter and receiver mounted in the free atmosphere (Fig. 2), 0.5 – 5 km apart (e.g. Ward et al. 2014).

A beam of near-infrared radiation (wavelength of ~850 nm) is passed from the transmitter to the receiver, where the beam intensity is measured. Fluctuations in the intensity reflect turbulent variations in the refractive index of air along the path, from which the turbulent sensible heat flux can be derived. As the received intensity is the result of fluctuations all along the beam, the derived quantities are spatially integrated, and are therefore at a larger scale than other flux measurement techniques (e.g. eddy covariance).
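In outline, the retrieval chain looks something like the sketch below. This is the generic large-aperture scintillometry approach (following, e.g., Ward et al. 2014) rather than the campaign’s exact processing, and it glosses over humidity corrections and the iteration for atmospheric stability; the symbols z_eff (effective beam height), d (displacement height) and L (Obukhov length) are introduced here purely for illustration.

```latex
% Generic LAS retrieval sketch (not the campaign's exact processing).
% 1. Measured intensity fluctuations yield the refractive-index structure
%    parameter C_n^2 along the path.
% 2. At near-infrared wavelengths C_n^2 is dominated by temperature
%    fluctuations, so (p in Pa, T in K, humidity terms neglected):
C_T^2 \;\approx\; C_n^2 \left( \frac{T^2}{0.78 \times 10^{-6}\, p} \right)^{\!2}
% 3. Monin-Obukhov similarity relates C_T^2 to the temperature scale T_*:
\frac{C_T^2 \,(z_{\mathrm{eff}} - d)^{2/3}}{T_*^2} \;=\; f_T\!\left(\frac{z_{\mathrm{eff}} - d}{L}\right)
% 4. The sensible heat flux then follows from T_* and the friction velocity u_*:
H \;=\; -\rho\, c_p\, u_*\, T_*
```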

Figure 2: One of the six large aperture scintillometer path transects (orange). Ground height (blue) is shown between the receiver site (GROP) and transmitter site (OSWE) in Berlin. The path’s effective beam height is 50 m above ground level.

Our Visit to Berlin

During the first week of August, we travelled to Berlin for three days of fieldwork to prepare for an intense observation period (IOP). This trip included installing sensors and testing that they worked as expected. We visited three observation sites: GROP (123 m above sea level, Fig. 2), OSWE (63 m, Fig. 2) and NEUK (60 m).

One of the main purposes of this visit was to align two of the LAS paths (including the one in Fig. 2). Initially, work is undertaken at the transmitter site (Fig. 3, top) to point the instrument in the approximate direction of the receiver using a sight (Fig. 3, right hand side photographs).

At the receiver site (Fig. 3, bottom), the instrument’s measurement of signal strength can be displayed on a monitor in real time. Using this output as a guide, small adjustments to the receiver’s alignment are made by loosening or tightening two bolts on the mount: one which adjusts the receiver’s pitch, and one which adjusts the yaw. This was carried out until we reached a peak reading in signal strength, indicating the path was aligned.

Figure 3: Photographs of the large aperture scintillometer transmitter at site OSWE (top) and receiver at site GROP (bottom).

Our contribution to the IOP

Back in Reading, daily weather forecasts were carried out for the IOP to determine when ground-based observations could be made. As the field campaign coincided with the central European heat wave, some of the highest temperatures were recorded during the IOP, and there was a need to forecast thunderstorms and the possibility of lightning strikes.

Ideal conditions for observations were clear skies and a consistent wind direction with height. A variety of different wind directions during the IOP was also preferable, to capture different transects of Berlin. For the selected days, group members in Berlin deployed multiple weather balloons simultaneously across multiple sites within the city and the outskirts. This was also timed with satellite overpasses. Observations of the mixing layer height (urban and suburban) were taken using a ceilometer mounted in a van, which drove along different transects of Berlin.

As the field campaign is wrapping up in Berlin, several instruments are now being moved to the new focus city: Paris. We are looking forward to this new period of interesting observations! Thank you and goodbye from us at the top of the GROP observation site!

References

urbisphere, 2021: Project Context and Objectives. http://urbisphere.eu/ (accessed 27/09/22)

Ward, H. C., J. G. Evans, and C. S. B. Grimmond, 2014: Multi-Scale Sensible Heat Fluxes in the Suburban Environment from Large-Aperture Scintillometry and Eddy Covariance. Boundary-Layer Meteorol., 152, 65–89.

Deploying an Instrument to the Reading University Atmospheric Observatory 

Caleb Miller – c.s.miller@pgr.reading.ac.uk 

In the Reading area, December and January seem to be prime fog season. Since I’m studying the effects of fog on atmospheric electricity, that means that winter is data collection season! However, in order to begin collecting data in the first year of my PhD, there was only a short amount of time to prepare an instrument and deploy it to the observatory before Christmas. 

One of the instruments that I am using to measure fog is called the Optical Cloud Sensor (OCS). It was designed by Giles Harrison and Keri Nicoll, and it is described in more detail in Harrison and Nicoll (2014). The OCS has four channels of LEDs which shine light into the surrounding air. When fog is present, the fog droplets scatter light back to the instrument, where the intensity from each channel can be measured.

Powering the instrument 

The OCS was originally designed to be flown on a weather balloon, which meant it was built to be powered by battery and to run for only short periods of time. In my case, however, I wanted the device to be able to continuously collect data over a period of weeks or months without interruption. Then we would be able to catch any fog events, even if they hadn’t been forecast. That meant the device would need to be powered by the +15 V power supply available at the observatory, and my first step was to create a power adapter for the OCS so that this would be possible.

Initially, I had been considering using an Arduino microcontroller as a datalogger, so I decided to put together a power adapter on an Arduino shield (a small electronic platform) for maximum convenience. I included multiple voltage levels on my power adapter and connected them to different power inputs on the OCS. Once this was completed, the entire system could be powered from the single power supply available at the observatory!

I was able to find all of the required parts for the power supply in stock in the laboratory in the Meteorology Department, and I soldered it together in a few days. The technical staff of the university were very helpful in this process! A photograph of the power adapter connected to an Arduino is shown in Figure 1. 

Figure 1. The power adapter for the optical cloud sensor, built on an Arduino shield 

Storing data from the instrument 

Once the power supply had been created, the next step was setting up a datalogging system. On a balloon, the data would be streamed in real-time down to a ground station by radio link. But when this system was deployed to the ground, that would no longer be necessary. 

Instead, I decided to use a CR1000X datalogger from Campbell Scientific. This system has a number of voltage inputs which can be programmed using a graphical interface over a USB connection, and it has a port for an SD card. I programmed the datalogger to sample each of the four analog channels coming from the OCS every five seconds and to store the measurements on an SD card. Collecting the measurements was then as simple as removing the SD card from the datalogger and copying the data to my laptop. This could be done without interrupting the datalogger, as it has its own internal storage, and it would continue measuring while the SD card was removed. 
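As a rough illustration of what happens after the SD card reaches the laptop (and not the actual processing code), a script along these lines can turn the 5-second samples into a quick look at candidate fog periods. The file name, column names, averaging period and threshold are all hypothetical assumptions.

```python
# Illustrative only: read 5-second OCS voltages copied from the datalogger's
# SD card and flag periods of elevated backscatter as candidate fog events.
import pandas as pd

# Hypothetical export file and column names.
ocs = pd.read_csv("ocs_datalogger_export.csv",
                  parse_dates=["timestamp"], index_col="timestamp")
channels = ["ch1_volts", "ch2_volts", "ch3_volts", "ch4_volts"]

# Average the 5-second samples to 1-minute means to suppress noise.
ocs_1min = ocs[channels].resample("1min").mean()

# Flag times when all four channels exceed a rough clear-air baseline,
# i.e. when fog droplets are scattering LED light back to the sensor.
baseline = ocs_1min.quantile(0.10)                 # per-channel clear-air level
candidate_fog = (ocs_1min > baseline + 0.05).all(axis=1)

print("Candidate fog minutes:", int(candidate_fog.sum()))
```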

I had also considered simultaneously logging a digital form of the measurements to an Arduino in addition to the analog measurements made by the datalogger. This would give us two redundant logging systems which would decrease the chances of losing valuable information in the event of an instrument malfunction. However, due to a shortage of time and a technical issue with the instrument’s digital channels, I was unable to prepare the Arduino logger by the time we were ready to deploy the OCS, so we used only the analog datalogger. 

Figure 2. The OCS with its new power supply being tested in the laboratory 

Deploying the instrument 

Once the power supply and datalogger were completed, the instrument was ready to be deployed! It was a fairly simple process to get approval to put the instrument in the observatory; then I met with Ian Read to find a suitable location to set up the OCS. There were several posts in the observatory which were free, and I chose one which was close to the temperature and humidity sensors in the hopes that the conditions would be fairly similar in those locations. Once everything was ready, the technicians and I took the OCS and datalogger and set it up in the field site. At first, when we powered it on, nothing happened. Apparently, one of the solder joints on my power adapter had been damaged when I carried it across campus. However, I resoldered that connection with advice from the university technical staff, and it worked beautifully! 

Figure 3. The datalogger inside its enclosure in the observatory 

Figure 4. The OCS attached to its post in the observatory  

Except for a short period of maintenance in January, the OCS has been running continuously from December until May, and it has already captured quite a few fog events! With the data from the OCS, I now have an additional resource to use in analyzing fog. The levels of light backscattered from the four channels of the instrument provide interesting information, which I am combining with electrical and visibility measurements to analyze the microphysical properties of fog development. 

Hopefully, over the next year, we will be able to measure many more fog events with this instrument that will help us to better understand fog! 

Harrison, R. G., and K. A. Nicoll, 2014: Note: Active optical detection of cloud from a balloon platform. Rev. Sci. Instrum., 85, 066104, https://doi.org/10.1063/1.4882318. 

Ensemble Data Assimilation with auto-correlated model error

Haonan Ren – h.ren@pgr.reading.ac.uk

Data assimilation is a mathematical method to combine forecasts with observations, in order to improve the accuracy of the original forecast. Normally, data assimilation methods are performed under the perfect-model assumption (strong-constraint). However, there are various sources of model error, such as missing descriptions of the dynamical system and numerical discretisation. Therefore, in recent years, model error has increasingly been accounted for in the data assimilation process (weak-constraint settings).

There are several data assimilation methods applied in various fields. My PhD project mainly focuses on the ensemble/Monte Carlo formulation of Kalman filter-based methods, more specifically the ensemble Kalman Smoother (EnKS). Unlike the filter, a smoother updates the state of the system using observations from the past, present and possibly the future. The smoother does not only improve the estimate at the observation time; it updates the whole simulation period.

The main purpose of my research is to investigate the performance of data assimilation methods with auto-correlated model error. We want to know what will happen if we propose a misspecified auto-correlation in the model error, for both the state update and parameter estimation. We start our project with a very simple linear auto-regressive model. As for the auto-correlation in the model error, we propose an exponentially decaying decorrelation. The true system therefore has an exponential decay parameter (memory timescale) ωr, and the parameter we use in the forecast and data assimilation is ωg, which can differ from the real one.
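To make the set-up concrete, here is a minimal sketch of how exponentially decorrelating model error can be generated for an ensemble. The symbols follow the text (ω is the memory timescale), but the discretisation, coefficients and values are illustrative assumptions rather than the project’s exact configuration.

```python
# Minimal sketch (illustrative values, not the project's exact configuration):
# generate model error with an exponentially decaying autocorrelation
# exp(-dt/omega), where omega is the memory timescale, and propagate a simple
# linear auto-regressive forecast model for an ensemble.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_ens = 20, 200      # time window length and ensemble size
dt, sigma = 1.0, 0.1          # time step and model-error standard deviation
omega = 5.0                   # memory timescale; omega -> 0 gives white noise,
                              # omega -> infinity approaches a constant bias

a = np.exp(-dt / omega)                        # AR(1) autocorrelation per step
eta = sigma * rng.standard_normal(n_ens)       # initial model error per member
x = np.zeros((n_steps + 1, n_ens))             # state trajectories

for t in range(n_steps):
    # AR(1) model error: correlated in time, stationary variance sigma**2
    eta = a * eta + np.sqrt(1.0 - a**2) * sigma * rng.standard_normal(n_ens)
    # simple linear forecast model (coefficient 0.9 is illustrative)
    x[t + 1] = 0.9 * x[t] + eta
```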

A simple example can illuminate the decorrelation issue. In Figure 1, we show results of a smoothing process for a simple one-dimensional system over a time window of 20 nature time steps. We use an ensemble Kalman Smoother with two different observation densities in time. The memories in the nature model and the forecast model do not coincide. We can see that with ωr = 0.0, when the actual model error is a white-in-time random variable, the evolution of the true state of the system behaves rather erratically with the present model settings. If we do not know the memory and assume the model error is a bias in the data assimilation process (ωg → ∞), the estimate made by the data assimilation method is not even close to the truth, even with very dense observations in the simulation period, as shown in the two subplots on the left in Figure 1. On the other hand, if the model error in the true model evolution behaves like a bias, and we assume that the model error is white in time in the data assimilation process, the results are quite different with different observation frequencies. As shown in the two subplots on the right in Figure 1, with very frequent observations we see a fairly good performance of the data assimilation process, but with a single observation the estimate is still not accurate.

Figure 1: Plots of the trajectories of the true state of the system (black), three posterior ensemble members (pink, randomly chosen from 200 members), and the posterior ensemble mean (red). The left subplots show results for a true white noise model error and an assumed bias model error for two observation densities. Note that the posterior estimates are poor in both cases. The right subplots depict a bias true model error and an assumed white noise model error. The result with one observation is poor, while if many observations are present the assimilation result is consistent within the ensemble spread.

In order to evaluate the performance of the EnKS, we need to compare the root-mean-square error (RMSE) with the ensemble spread of the posterior. The best performance of the EnKS is when the ratio of RMSE over the spread is equal to 1.0. The results are shown in Figure 2. As we can see, the Kalman Smoother works well when ωg = ωr in all cases, with the ratio of RMSE over spread equal to 1.0. With a relatively high observational frequency, 5 observations or more in the simulation window, the RMSE is larger than the spread when ωg > ωr, and vice versa. In a further investigation, we found that the mismatch between the two timescales ωr and ωg barely has any impact on the RMSE; the ratio is dominated by the ensemble spread.

Figure 2: Ratio of MSE over the posterior variance for the 1-dimensional system, calculated using numerical evaluation of the exact analytical expressions. The different panels show results for different numbers of observations.

Then, we move on to estimating the parameter encoded in the auto-correlation of the model error. We estimate the exponential decay parameter by state augmentation using the EnKS, and the results are shown in Figure 3. Instead of the exponential parameter ωg itself, we use the logarithm of the memory timescale to avoid negative memory estimates. The initial log-timescale values are drawn from a normal distribution: ln ωg,i ~ N(ln ωg, 1.0). Hence we assume that the prior distribution of the memory timescale is lognormal. According to Figure 3, with an increasing number of windows we obtain better estimates, and the convergence is faster with more observations. In some cases, however, the solution did not converge to the correct value. This is not surprising given the highly nonlinear character of the parameter estimation problem, especially with only one observation per window. When we observe every time step, the convergence is much faster and the variance in the estimate decreases, as shown in the lower two subplots. In this case we always found fast convergence for different first-guess and true timescale combinations, demonstrating that more observations bring us closer to the truth, and hence make the parameter estimation problem more linear.
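A minimal way to write the augmentation (a generic sketch in the notation of this post, not the exact implementation) is:

```latex
% Augmented state for ensemble member i: the model state plus the log of the
% memory timescale, so the timescale estimate can never become negative.
z_i \;=\; \begin{pmatrix} x_i \\ \ln \omega_{g,i} \end{pmatrix},
\qquad \ln \omega_{g,i} \sim \mathcal{N}\!\bigl(\ln \omega_g,\; 1.0\bigr)
% The EnKS updates z_i as a whole, so the ensemble cross-covariance between
% x and ln(omega_g) carries observation information into the parameter estimate.
```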

Figure 3: PDFs of the prior (blue) and posterior (reddish colours) estimated ωg, using an increasing number of assimilation windows. The different panels show results for different observation densities and prior mean larger (a, c) or smaller (b, d) than ωr. The vertical black line denotes the true value ωr.

As the results of the experiments show, the influence of an incorrect decorrelation timescale in the model error can be significant. We found that when the observation density is high, state augmentation is sufficient to obtain converging results. The new element is that online estimation is possible beyond a relatively simple bias estimate of the model error.

As a next step we will explore the influence of incorrectly specified model errors in nonlinear systems, and a more complex auto-correlation in the model error.

Communicating uncertainties associated with anthropogenic climate change

Email: j.f.talib@pgr.reading.ac.uk

This week Prof. Ed Hawkins from the Department of Meteorology and NCAS-Climate gave a University of Reading public lecture discussing the science of climate change. A plethora of research was presented, all highlighting that humans are changing our climate. As scientists we can study the greenhouse effect in the laboratory, observe increasing temperatures across the majority of the planet, or simulate the impact of human actions on the Earth’s climate using climate models.

Figure 1. Global-mean surface temperature in observations (solid black line), and climate model simulations with (red shading) and without (blue shading) human actions. Shown during Prof. Ed Hawkins’ University of Reading Public Lecture.

Fig. 1, presented in Ed Hawkins’ lecture, shows the global-mean temperature rise associated with human activities. Two sets of climate simulations have been performed to produce this plot. The first set, shown in blue, are simulations controlled solely by natural forcings, i.e. variations in radiation from the sun and volcanic eruptions. The second, shown in red, are simulations which include both natural forcing and the forcing associated with greenhouse gas emissions from human activities. The shading indicates the spread amongst climate models, whilst the observed global-mean temperature is shown by the solid black line. From this plot it is evident that all climate models attribute the rising temperatures over the 20th and 21st centuries to human activity. Climate simulations without greenhouse gas emissions from human activity indicate a much smaller rise, if any, in global-mean temperature.

However, whilst there is much agreement amongst climate scientists and climate models that our planet is warming due to human activity, understanding the local impact of anthropogenic climate change carries its own uncertainties.

For example, my PhD research aims to understand what controls the location and intensity of the Intertropical Convergence Zone. The Intertropical Convergence Zone is a discontinuous, zonal precipitation band in the tropics that migrates meridionally over the seasonal cycle (see Fig. 2). The Intertropical Convergence Zone is associated with wet and dry seasons over Africa, the development of the South Asian Monsoon and the life-cycle of tropical cyclones. However, currently our climate models struggle to simulate characteristics of the Intertropical Convergence Zone. This, alongside other issues, results in climate models differing in the response of tropical precipitation to anthropogenic climate change.

Figure 2. Animation showing the seasonal cycle of the observed monthly-mean precipitation rates between 1979-2014.

Figure 3 is a plot taken from a report written by the Intergovernmental Panel on Climate Change (Climate Change 2013: The Physical Science Basis). Both maps show the projected change in Northern Hemisphere winter precipitation from climate model simulations for 2016 to 2035 (left) and 2081 to 2100 (right), relative to 1986 to 2005, under a scenario where minimal action is taken to limit greenhouse gas emissions (RCP8.5). Whilst the projected changes in precipitation are an interesting topic in their own right, I’d like to draw your attention to the lines and dots annotated on each map. The lines indicate where the majority of climate models agree on a small change. The map on the left indicates that most climate models agree on small changes in precipitation over the majority of the globe over the next two decades. Dots, meanwhile, indicate where climate models agree on a substantial change in Northern Hemisphere winter precipitation. The plot on the right indicates that across the tropics there are substantial areas where models disagree on changes in tropical precipitation due to anthropogenic climate change. Over the majority of Africa, South America and the Maritime Continent, models disagree on the future of precipitation under climate change.

Figure 3. Changes in Northern Hemisphere Winter Precipitation between 2016 to 2035 (left) and 2081 to 2100 (right) relative to 1986 to 2005 under a scenario with minimal reduction in anthropogenic greenhouse gas emission. Taken from IPCC – Climate Change 2013: The Physical Science Basis.

How should scientists present these uncertainties?

I must confess that I am nowhere near an expert in communicating uncertainties; however, I hope some of my thoughts will encourage a discussion amongst scientists and users of climate data. Here are some of the ideas I’ve picked up on during my PhD, and thoughts associated with them:

  • Climate model average – Take the average amongst climate model simulations. With this method, though, you risk smoothing out large positive and negative trends. The climate model average is also not a “true” projection of changes due to anthropogenic climate change.
  • Every climate model outcome – Show the range of climate model projections to the user. Here you face the risk of presenting the user with too much climate data. The user may also trust certain model outputs which suit their own agenda.
  • Storylines – This idea was first shown to me in a paper by Zappa and Shepherd (2017). You present a series of storylines in which you highlight the key processes that are associated with variability in the regional weather pattern of interest. Each change in the set of processes leads to a different climate model projection. However, once again, the user of the climate model data has to reach their own conclusion on which projection to take action on.
  • Probabilities with climate projections – Typically, with short- and medium-range weather forecasts, probabilities are used to support the user. These probabilities are generated by re-running the simulations, each with either different initial conditions or a slight change in model physics, to see what percentage of simulations agree on the model output. With climate model simulations, however, it is more difficult to associate probabilities with projections. How do you generate the probabilities? Climate models share similarities in the methods they use to represent the physics of our atmosphere, and therefore you don’t want the probabilities associated with each projection to simply reflect how many similarly configured models agree. You could base the probabilities on how well each climate model simulates the past; however, just because a model simulates the past correctly doesn’t mean it will correctly simulate the response to future forcing.

There is much more that can be said about communicating uncertainty among climate model projections – a challenge which will continue for several decades. As climate scientists we can sometimes fall into the trap of concentrating on uncertainties. We need to keep presenting the work that we are confident about, to ensure that the right action is taken to mitigate against anthropogenic climate change.

The Role of the Cloud Radiative Effect in the Sensitivity of the Intertropical Convergence Zone to Convective Mixing

Email: j.f.talib@pgr.reading.ac.uk

Talib, J., S.J. Woolnough, N.P. Klingaman, and C.E. Holloway, 2018: The Role of the Cloud Radiative Effect in the Sensitivity of the Intertropical Convergence Zone to Convective Mixing. J. Climate, 31, 6821–6838, https://doi.org/10.1175/JCLI-D-17-0794.1

Rainfall in the tropics is commonly associated with the Intertropical Convergence Zone (ITCZ), a discontinuous line of convergence collocated at the ascending branch of the Hadley circulation, where strong moist convection leads to high rainfall. What controls the location and intensity of the ITCZ remains a fundamental question in climate science.

Figure 1: Annual-mean, zonal-mean tropical precipitation (mm day-1) from Global Precipitation Climatology Project (GPCP, observations, solid black line) and CMIP5 (current coupled models) output. Dashed line indicates CMIP5 ensemble mean.

In current and previous generations of climate models, the ITCZ is too intense in the Southern Hemisphere, resulting in two annual-mean, zonal-mean tropical precipitation maxima, one in each hemisphere (Figure 1). Even if we take the same atmospheric models and couple them to a world with only an ocean surface (aquaplanets) with prescribed sea surface temperatures (SSTs), different models simulate different ITCZs (Blackburn et al., 2013).

Within a climate model, parameterisations are used to replace processes that are too small-scale or complex to be physically represented in the model. Parameterisation schemes are used to simulate a variety of processes, including processes within the boundary layer, radiative fluxes and atmospheric chemistry. However, my work, along with a plethora of other studies, shows that the representation of the ITCZ is sensitive to the convective parameterisation scheme (Figure 2a). The convective parameterisation scheme simulates the life cycle of clouds within a model grid-box.

We show that the simulated ITCZ is sensitive to the convective parameterisation scheme by altering the convective mixing rate in prescribed-SST aquaplanet simulations. The convective mixing rate determines the amount of mixing a convective parcel has with the environmental air; therefore, the greater the convective mixing rate, the quicker a convective parcel becomes similar to the environmental air, given fixed convective parcel properties.
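A minimal way to picture this is the generic entraining-parcel relation below. This is an illustrative textbook form, not the specific equations of the convection scheme used in the study.

```latex
% Dilution of a parcel property phi_p towards its environmental value phi_e,
% with (fractional) entrainment/mixing rate epsilon:
\frac{\partial \phi_p}{\partial z} \;=\; -\,\varepsilon\,\bigl(\phi_p - \phi_e\bigr)
% A larger mixing rate epsilon pulls phi_p towards phi_e over a shallower
% depth, so the parcel loses its buoyancy excess more quickly.
```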

Figure 2: Zonal-mean, time-mean (a) precipitation rates (mm day-1) and (b) AEI (W m-2) in simulations where the convective mixing rate is varied.

In our study, the structure of the simulated ITCZ is sensitive to the convective mixing rate. Low convective mixing rates simulate a double ITCZ (two precipitation maxima, orange and red lines in Figure 2a), and high convective mixing rates simulate a single ITCZ (blue and black lines).

We then associate these ITCZ structures with the atmospheric energy input (AEI). The AEI is the amount of energy left in the atmosphere once the top-of-atmosphere and surface energy budgets are taken into account. We conclude, similarly to Bischoff and Schneider (2016), that when the AEI is positive (negative) at the equator, a single (double) ITCZ is simulated (Figure 2b). When the AEI is negative at the equator, energy needs to be transported towards the equator for equilibrium. From a mean-circulation perspective, this takes place in a double-ITCZ scenario (Figure 3). A positive AEI at the equator is associated with poleward energy transport and a single ITCZ.
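In symbols, a standard column-energy-budget definition consistent with this description is given below (the exact notation in the paper may differ slightly):

```latex
% Atmospheric energy input: net downward energy flux at the top of the
% atmosphere minus the net downward energy flux through the surface.
\mathrm{AEI} \;=\; \bigl(S^{\downarrow}_{t} - S^{\uparrow}_{t} - L^{\uparrow}_{t}\bigr) \;-\; F_{s}
% S_t: top-of-atmosphere shortwave fluxes, L_t: outgoing longwave radiation,
% F_s: net downward surface energy flux. AEI > 0 at the equator implies the
% atmosphere exports energy polewards (single ITCZ); AEI < 0 requires
% equatorward atmospheric energy transport (double ITCZ).
```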

Figure 3: Schematic of a single (left) and double ITCZ (right). Blue arrows denote energy transport. In a single-ITCZ scenario more energy is transported in the upper branches of the Hadley circulation, resulting in a net poleward energy transport. In a double-ITCZ scenario, more energy is transported equatorward than poleward at low latitudes, leading to an equatorward energy transport.

In our paper, we use this association between the AEI and the ITCZ to hypothesize that without the cloud radiative effect (CRE), the atmospheric heating due to cloud-radiation interactions, a double ITCZ will be simulated. We also hypothesize that prescribing the CRE will reduce the sensitivity of the ITCZ to convective mixing, as the simulated AEI changes are predominantly due to CRE changes.

In the rest of the paper we perform simulations with the CRE removed and prescribed, to explore further the role of the CRE in the sensitivity of the ITCZ. We conclude that removing the CRE makes a double ITCZ more favourable, and that in both sets of simulations the ITCZ is less sensitive to convective mixing. The remaining sensitivity is associated with changes in the latent heat flux.

My future work following this publication explores the role of coupling in the sensitivity of the ITCZ to the convective parameterisation scheme. Prescribing the SSTs implies an arbitrary ocean heat transport; however, in the real world the ocean heat transport is sensitive to the atmospheric circulation. Does this sensitivity between the ocean heat transport and the atmospheric circulation affect the sensitivity of the ITCZ to convective mixing?

Thanks to my funders, SCENARIO NERC DTP, and supervisors for their support for this project.

References:

Blackburn, M. et al., (2013). The Aqua-planet Experiment (APE): Control SST simulation. J. Meteo. Soc. Japan. Ser. II, 91, 17–56.

Bischoff, T. and Schneider, T. (2016). The Equatorial Energy Balance, ITCZ Position, and Double-ITCZ Bifurcations. J. Climate., 29(8), 2997–3013, and Corrigendum, 29(19), 7167–7167.