International Conferences on Subseasonal to Decadal Prediction

I was recently fortunate enough to attend the International Conferences on Subseasonal to Decadal Prediction in Boulder, Colorado. This was a week-long event organised by the World Climate Research Programme (WCRP) and was a joint meeting with two conferences taking place simultaneously: the Second International Conference on Subseasonal to Seasonal Prediction (S2S) and the Second International Conference on Seasonal to Decadal Prediction (S2D). There were also joint sessions addressing common issues surrounding prediction on these timescales.

Weather and climate variations on subseasonal to seasonal (from around 2 weeks to a season) to decadal timescales can have enormous social, economic, and environmental impacts, making skilful predictions on these timescales a valuable tool for policymakers. As a result, there is growing interest within the scientific and operational forecasting communities in developing forecasts that improve our ability to predict severe weather events. On S2S timescales, these include high-impact meteorological events such as tropical cyclones, floods, droughts, and heat and cold waves. On S2D timescales, while the focus broadly remains on similar quantities (such as precipitation and surface temperature), deciphering the roles of internal and externally forced variability in forecasts also becomes important.

Attendees of the International Conferences on Subseasonal to Decadal Prediction

The conferences were attended by nearly 350 people, of which 92 were Early Career Scientists (either current PhD students or those who completed their PhD within the last 5-7 years), from 38 different countries. There were both oral and poster presentations on a wide variety of topics, including mechanisms of S2S and S2D predictability (e.g. the stratosphere and tropical-extratropical teleconnections) and current modelling issues in S2S and S2D prediction. I was fortunate to be able to give an oral presentation about some of my recently published work, in which we examine the performance of the ECMWF seasonal forecast model at representing a teleconnection mechanism which links Indian monsoon precipitation to weather and climate variations across the Northern Hemisphere. After my talk I spoke to several other people who are working on similar topics, which was very beneficial and helped give me ideas for analysis that I could carry out as part of my own research.

One of the best things about attending an international conference is the networking opportunities that it presents, both with people you already know and with potential future collaborators from other institutions. This conference was no exception, and as well as lunch and coffee breaks there was an Early Career Scientists evening meal. This gave me a chance to meet scientists from all over the world who are at a similar stage of their career to myself.

The view from the NCAR Mesa Lab

Boulder is located at the foot of the Rocky Mountains, so after the conference I took the opportunity to do some hiking on a few of the many trails that lead out from the city. I also took a trip up to NCAR’s Mesa Lab, which is located up the hillside away from the city and has spectacular views across Boulder and the high plains of Colorado, as well as a visitor centre with meteorological exhibits. It was a great experience to attend this conference and I am very grateful to NERC and the SummerTIME project for funding my travel and accommodation.

Email: j.beverley@pgr.reading.ac.uk

Modelling windstorm losses in a climate model

Extratropical cyclones cause vast amounts of damage across Europe throughout the winter season. The damage from these cyclones mainly comes from their associated severe winds. The most intense cyclones have gusts of over 200 kilometres per hour, resulting in substantial damage to property and forestry; for example, the Great Storm of 1987 uprooted approximately 15 million trees in a single night. The average loss from these storms is over $2 billion per year (Schwierz et al., 2010), second only globally to Atlantic hurricanes in terms of insured losses from natural hazards. However, the most severe cyclones, such as Lothar (26/12/1999) and Kyrill (18/1/2007), can cause losses in excess of $10 billion (Munich Re, 2016). One property of extratropical cyclones is their tendency to cluster (to arrive in groups – see the example in Figure 1), and in such cases these impacts can be greatly increased. For example, Windstorm Lothar was followed just one day later by Windstorm Martin, and the two storms combined caused losses of over $15 billion. The large-scale atmospheric dynamics associated with clustering events have been discussed in a previous blog post and also in the scientific literature (Pinto et al., 2014; Priestley et al., 2017).

Figure 1. Composite visible satellite image from 11 February 2014 of 4 extratropical cyclones over the North Atlantic (circled) (NASA).

A large part of my PhD has involved investigating exactly how important the clustering of cyclones is for losses across Europe during the winter. To do this, I have used 918 years of high-resolution coupled climate model data from HiGEM (Shaffrey et al., 2017), which provides a huge number of winter seasons and cyclone events for analysis.

To understand how clustering affects losses, I first of all need to know how much loss/damage is associated with each individual cyclone. This is done using a measure called the Storm Severity Index (SSI – Leckebusch et al., 2008), a proxy for losses based on the 10-metre wind field of the cyclone events. The SSI works as follows. Firstly, the wind speed at each location is scaled by the 98th percentile of the wind speed climatology at that location. This scaling ensures that only the most severe winds at any one point are considered, as different locations have different thresholds for what counts as ‘damaging’. The exceedance above the 98th percentile is then raised to the power of 3, because damage from wind is a highly non-linear function of wind speed. Finally, we apply a population density weighting. This weighting is required because a hypothetical gust of 40 m/s across London will cause considerably more damage than the same gust across far northern Scandinavia, and population density is a good approximation for the density of insured property. An example of the SSI calculated for Windstorm Lothar is shown in Figure 2.
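
As a rough sketch of this calculation (an illustration only, not the exact implementation used in the study; the gridded field names are hypothetical), the SSI for a single storm footprint might be computed as follows:

    import numpy as np

    def storm_severity_index(wind, wind_p98, pop_density):
        """Minimal SSI sketch for one storm footprint.

        wind        : 2D array of 10-metre wind speeds for the storm (m/s)
        wind_p98    : 2D array of the local 98th-percentile wind climatology (m/s)
        pop_density : 2D array used as a proxy for the density of insured property
        """
        # Only winds exceeding the local 98th percentile contribute to damage
        exceedance = np.maximum(wind / wind_p98 - 1.0, 0.0)
        # Damage is a highly non-linear (cubic) function of that exceedance,
        # weighted by population density and summed over the grid
        return np.sum(pop_density * exceedance ** 3)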


Figure 2. (a) Wind footprint of Windstorm Lothar (25-27/12/1999) – 10 metre wind speed in coloured contours (m/s). Black line is the track of Lothar with points every 6 hours (black dots). (b) The SSI field of Windstorm Lothar. All data from ERA-Interim.


From Figure 2b you can see that most of the damage from Windstorm Lothar was concentrated across central/northern France and southern Germany. This is because the winds there were the most extreme relative to the local climatology. Even though the winds are strongest over the North Atlantic Ocean, the lack of insured property, and a much higher climatological winter-mean wind speed, mean that we do not observe losses/damage from Windstorm Lothar in these locations.

Figure 3. The average SSI for 918 years of HiGEM data.


I can apply the SSI to all of the individual cyclone events in HiGEM and therefore construct a climatology of where windstorm losses occur. Figure 3 shows the average loss across all 918 years of HiGEM. You can see that the losses are concentrated in a band extending eastwards from the southern UK towards Poland, mainly covering Great Britain, Belgium, the Netherlands, France, Germany, and Denmark.

This blog post has introduced my methodology for calculating and investigating the losses associated with winter-season extratropical cyclones. Priestley et al. (2018) uses this methodology to investigate the role of clustering in winter windstorm losses.

This work has been funded by the SCENARIO NERC DTP and also co-sponsored by Aon Benfield.


Email: m.d.k.priestley@pgr.reading.ac.uk


References

Leckebusch, G. C., Renggli, D., and Ulbrich, U. 2008. Development and application of an objective storm severity measure for the Northeast Atlantic region. Meteorologische Zeitschrift. https://doi.org/10.1127/0941-2948/2008/0323.

Munich Re. 2016. Loss events in Europe 1980 – 2015. 10 costliest winter storms ordered by overall losses. https://www.munichre.com/touch/naturalhazards/en/natcatservice/significant-natural-catastrophes/index.html

Pinto, J. G., Gómara, I., Masato, G., Dacre, H. F., Woollings, T., and Caballero, R. 2014. Large-scale dynamics associated with clustering of extratropical cyclones affecting Western Europe. Journal of Geophysical Research: Atmospheres. https://doi.org/10.1002/2014JD022305.

Priestley, M. D. K., Dacre, H. F., Shaffrey, L. C., Hodges, K. I., and Pinto, J. G. 2018. The role of European windstorm clustering for extreme seasonal losses as determined from a high resolution climate model, Nat. Hazards Earth Syst. Sci. Discuss., https://doi.org/10.5194/nhess-2018-165, in review.

Priestley, M. D. K., Pinto, J. G., Dacre, H. F., and Shaffrey, L. C. 2017. Rossby wave breaking, the upper level jet, and serial clustering of extratropical cyclones in western Europe. Geophysical Research Letters. https://doi.org/10.1002/2016GL071277.

Schwierz, C., Köllner-Heck, P., Zenklusen Mutter, E. et al. 2010. Modelling European winter wind storm losses in current and future climate. Climatic Change. https://doi.org/10.1007/s10584-009-9712-1.

Shaffrey, L. C., Hodson, D., Robson, J., Stevens, D., Hawkins, E., Polo, I., Stevens, I., Sutton, R. T., Lister, G., Iwi, A., et al. 2017. Decadal predictions with the HiGEM high resolution global coupled climate model: description and basic evaluation, Climate Dynamics, https://doi.org/10.1007/s00382-016-3075-x.

Advice for students starting their PhD

The Meteorology Department at Reading has just welcomed its new cohort of PhD students, so we gathered some pearls of wisdom for the years ahead:


“Start good habits from the beginning; decide how you will make notes on papers, and how you will save papers, and where you will write notes, and how you will save files. Create a spreadsheet of where your code is, and what it does, and what figure it creates. It will save you so much time.”

“Write down everything you do in note form; this helps you a) track back your progress if you take time out from research and b) makes writing your thesis a LOT easier…”

“Pick your supervisor carefully. Don’t kid yourself that they will be different as a PhD supervisor; look for someone understanding and supportive.”

“Expect the work to progress slowly at first, things will not all work out simply.”

“Don’t give up! And don’t be afraid to ask for help from other PhDs or other staff members (in addition to your supervisors).”

“Don’t compare yourself to other PhDs, and make sure to take some time off, you’re allowed a holiday!”

“Ask for help all the time.”

“Keep a diary of the work you do each day so you remember exactly what you’ve done 6 months later.”

“Don’t worry if your supervisors/people in years above seem to know everything, or can do things really easily. There hasn’t been an administrative cock-up, you’re not an impostor: everyone’s been there. Also, get into a good routine. It really helps.”

“Talk to your supervisor about what both of you expect and decide how often you want to meet at the beginning. This will make things easier.”

“Don’t compare with other students. A PhD is an individual project with its own aims and targets. Everyone will get to completion on their own journey.”

“You’ll be lost but achieving something. You can’t see it yet.”

The Many Speak Of Computer

Knowing multiple languages can be hard. As any polyglot will tell you, there are many difficulties that come from mixing and matching languages: losing vocabulary in both, only being able to think and speak in one at a time, having to remember to apply the correct spelling and pronunciation conventions in the correct contexts.

Humans aren’t the only ones who experience these types of multiple-language issues, however. Computers can also suffer from linguistic problems pertaining to the “programming languages” humans use to communicate with them, as well as the more hidden, arcane languages they use to speak to one another. This can cause untold frustration to their users. Dealing with seemingly arbitrary computing issues while doing science, we humans, especially if we aren’t computing experts, can get stuck in a mire with no conceivable way out.

Caught in the Matrix…

Problems with programming languages are the easiest problems of this nature to solve. Often the mistake is that the human in question lacks the necessary vocabulary, or syntax, and the problem can be solved with a quick peruse of Google or Stack Exchange to find someone with a solution. Humans are much better at communicating and expressing ideas in native human languages than computational ones. They often encounter the same problems as one another and describe them in similar ways. It is not uncommon to overhear a programmer lamenting: “But I know what I mean!” So looking for another human to act as a ‘translator’ can be very effective.

Otherwise, it’s a problem with the programming language itself; the language’s syntax is poorly defined, or doesn’t include provision for certain concepts or ideas to be expressed. Imagine trying to describe the taste of a lemon in a language which doesn’t possess words for ‘bitter’ or ‘sour’. At best these problems can be solved by installing some kind of library, or package, where someone else has written a work-around and you can piggy-back off of that effort. Like learning vocabulary from a new dialect. At worst you have to write these things yourself, and if you’re kind, and write good code, you will share them with the community; you’re telling the people down the pub that you’ve decided that the taste of lemons, fizzy colas, and Flanders red is “sour”.

Describe a lemon without using “bitter” or “sour”

There is, however, a more insidious and undermining class of problems, pertaining to the aforementioned arcane computer-only languages. These languages, more aptly called “machine code”, are the incomprehensible languages computers and different parts of a computer use to communicate with one another.

For many programming languages, known as “compiled languages”, the computer must ‘compile’ the code written by a human into machine code, which it then executes to run the program. This is generally a good thing: it helps catch errors before potentially disastrous code is run on a machine, and it significantly improves performance, as the computer doesn’t need to translate the code on the fly, line by line. But there is a catch.

There is no one single machine code. And unless a computer both knows the language an executable is written in, and is able to speak it, then tough tomatoes, it can’t run that code.

This is fine for code you have written and compiled yourself, but when importing code from elsewhere it can cause hard-to-diagnose problems, especially on the large computational infrastructures used in scientific computing, with many computers that might not all speak the same languages. In a discipline like meteorology, with a large legacy codebase, and where the use of certain libraries is assumed, not knowing how to execute pre-compiled code will leave the hopeful researcher in a rut. This is especially true where access to the source code of a library is restricted because it is a commercial product. You know there’s a dialect that has the words the computer needs to express itself, and you have a set of dictionaries, but you don’t know any of the languages and they’re all completely unintelligible; which dictionary do you use?

All unintelligible?

So what can you do? Attempt to find alternative codebases. Write them yourself. Often, however, we stand on the shoulders of giants, and rewriting everything yourself would be prohibitively time-consuming. Ask your institution’s computing team for help – but they don’t always know the answers.

There are solutions we can employ in our day-to-day coding practices that can help. Writing clear documentation, as well as maintaining clear style guides, can make a world of difference when attempting to diagnose problems that are machine-related as opposed to code-related. Keeping a repository of functions and procedures for oneself, even if it is not shared with the community, can also be a boon. You can’t see that a language doesn’t have a word for a concept unless you own a dictionary. Sometimes, pulling apart the ‘black box’-like libraries and packages we acquire from the internet, or our supervisors, or other scientists, is important in verifying that code does what we expect it to.

At the end of the day, you are not expected to be an expert in machine architecture. This is one of the many reasons why it is important to be nice to your academic computing team. If you experience issues with compilers not working on your institution’s computers, or with executables of libraries not running, it isn’t your job to fix them, and you shouldn’t feel bad if it holds your project up. Read some papers, concentrate on some other work, or work on your lit-review if you’re committed to continuing to do work. Personally, I took a holiday.

I have struggled with these problems and the solution has been to go to my PhD’s partner institution where we know the code works! Perhaps this is a sign that these problems can be extremely non-trivial, and are not to be underestimated.

Ahh well. It’s better than being a monoglot, at least.

Faster analysis of large datasets in Python

Have you ever run into a memory error or thought your function is taking too long to run? Here are a few tips on how to tackle these issues.

In meteorology we often have to analyse large datasets, which can be time-consuming and/or lead to memory errors. While the netCDF4, numpy and pandas packages in Python provide great tools for our data analysis, there are other packages we can use that parallelise our code: joblib, xarray and dask (see the links for documentation and the references below for further reading). Parallelisation means that the input data is split between the different cores of the computer and our analysis of different bits of data runs in parallel, rather than one bit after another, speeding up the process. At the end the data is collected and returned to us in the same form as before, just faster. One of the basic ideas behind the parallelisation is the ‘divide and conquer’ algorithm [Fig. 1] (see, e.g., Cormen et al. 2009, or Wikipedia for a brief introduction), which splits a problem into simpler subproblems that can be solved independently and then combined.

divide_and_conquer
Figure 1: A simple example of the ‘divide and conquer’ algorithm for sorting a list of numbers. First the list is split into simpler subproblems, which are then solved (sorted) and merged into a final sorted array. (Source)
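
To make the idea in Figure 1 concrete, here is a minimal merge sort in Python (purely illustrative; the parallel libraries below apply the same divide-and-conquer principle internally):

    def merge_sort(values):
        """Sort a list by splitting it in half, sorting each half, then merging."""
        if len(values) <= 1:                  # a list of 0 or 1 items is already sorted
            return values
        mid = len(values) // 2
        left = merge_sort(values[:mid])       # divide: sort each half independently
        right = merge_sort(values[mid:])      # (these two calls are independent pieces of work)
        merged = []                           # conquer: merge the two sorted halves
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]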

The simplest module we can use is joblib. This module can easily be applied to for-loops (see an example here): e.g. an operation that needs to be executed 1000 times can be split between the 40 cores that your computer has, making the calculation that much faster. Note that Python modules often include optimised routines, and we can sometimes avoid for-loops entirely, which is usually a faster option. A minimal joblib sketch is given below.
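
In this sketch the per-item function and the worker count are purely illustrative (chosen to match the numbers above); in practice n_jobs is usually set to the number of cores you actually have, or -1 for all of them:

    import numpy as np
    from joblib import Parallel, delayed

    def analyse_case(seed):
        """Stand-in for one of the 1000 independent calculations."""
        rng = np.random.default_rng(seed)
        return rng.standard_normal(10_000).mean()

    # Serial version:   results = [analyse_case(i) for i in range(1000)]
    # Parallel version: the 1000 iterations are shared between 40 worker processes
    results = Parallel(n_jobs=40)(delayed(analyse_case)(i) for i in range(1000))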

The xarray module provides tools for opening and saving multiple netCDF-type (though not limited to this) datasets, which can then be analysed either as numpy arrays or dask arrays. If we choose to use dask arrays (also available via the dask module), any command we apply to the array will automatically be calculated in parallel via a type of ‘divide and conquer’ algorithm. Note that this on its own does not help us avoid a memory error, as the data eventually has to be loaded into memory (potentially, using a for-loop over these xarray/dask arrays can speed up the calculation). There are often also options to run your job on high-memory nodes, and the larger the dataset the more time we save through parallelisation.

In the end it really depends on how much time you are willing to spend on learning about these arrays and whether it is worth the extra effort – I had to use them as they resolved my memory issues and sped up the code. It is certainly worth keeping this option in mind!

Getting started with xarray/dask

In the terminal window:

  • Use a system with conda installed (e.g. anaconda)
  • To start a bash shell type: bash
  • Create a new python environment (e.g. ‘my_env’) locally, so you can install custom packages. Give it a list of packages:
    • conda create -n my_env xarray
  • Then activate the new python environment (Make sure that you are in ‘my_env’ when using xarray):
    • source activate my_env
  • If you need to install any other packages, you can add them later (via conda install), or you could list them together with xarray when you create the environment:
    • conda install scipy pandas numpy dask matplotlib joblib #etc.
  • If the following paths are not ‘unset’ then you need to unset them (check this with command: conda info -a):
    • unset PYTHONPATH PYTHONHOME LD_LIBRARY_PATH
  • In python you can then simply import xarray, numpy or dask modules:
    • import xarray as xr; import dask.array as da; import numpy as np; from joblib import Parallel, delayed; # etc.
  • Now you can easily import datasets [e.g.: dataset = xr.open_dataset() from one file or dataset = xr.open_mfdataset() from multiple files; similarly dataset.to_netcdf() to save to one netcdf file or xr.save_mfdataset() to save to multiple netcdf files] and manipulate them using dask and xarray modules – documentation for these can be found in the links above and references below.
  • Once you open a dataset, you can access the data either by loading it into memory (xarray data array: dataset.varname.values) and analysing it as before using the numpy package (which will not run in parallel); or through the dask array (xarray dask array: dataset.varname.data), which will not load the data into memory (it only builds the best possible path to executing the operation) until you wish to save the data to a file, plot it, or explicitly compute it. Dask arrays can be analysed in a similar way to the well-known numpy arrays, but using the dask module instead [e.g. numpy.mean(array, axis=0) in dask becomes dask.array.mean(dask_array, axis=0)]. Many functions also exist in the xarray module itself, meaning you can run them on the dataset rather than the array [e.g. dataset.mean(dim=’time’) is equivalent to the mean in dask or numpy].
  • Caution: If you try to do too many operations on the array, the ‘divide and conquer’ task graph will become so complex that the programme will not be able to manage it. Therefore, it is best to calculate everything step by step, using dask_array.compute() or dask_array.persist(). Another issue I find with these new array modules is that they are slow when it comes to saving the data to disk (i.e. not any faster than other modules). A short end-to-end sketch combining these steps is given below.
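
Putting the pieces together, a minimal end-to-end sketch might look like the following (the file pattern, variable name and chunk size are hypothetical):

    import xarray as xr

    # Open many netCDF files lazily as a single dataset backed by dask arrays
    ds = xr.open_mfdataset("temperature_*.nc", chunks={"time": 100})

    # Nothing is computed yet: this only builds the 'divide and conquer' task graph
    time_mean = ds["t2m"].mean(dim="time")

    # Trigger the parallel computation and load the result into memory
    time_mean = time_mean.compute()

    # Save the (now in-memory) result to a new netCDF file
    time_mean.to_netcdf("t2m_time_mean.nc")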

I would like to thank Shannon Mason and Peter Gabrovšek for their helpful advice and suggestions.

References

Cormen, T.H., C.E. Leiserson, R.L. Rivest, C. Stein, 2009: An introduction to algorithms. MIT press, third edition, 1312 pp.

Dask Development Team, 2016: Dask: Library for dynamic task scheduling. URL: http://dask.pydata.org

Hoyer, S. & Hamman, J., 2017. xarray: N-D labeled Arrays and Datasets in Python. Journal of Open Research Software. 5, p.10. DOI: http://doi.org/10.5334/jors.148

Hoyer, S., C. Fitzgerald, J. Hamman and others, 2016: xarray: v0.8.0. DOI: http://dx.doi.org/10.5281/zenodo.59499

Rocklin, M., 2016: Dask: Parallel Computation with Blocked algorithms and Task Scheduling. Proceedings of the 14th Python in Science Conference (K. Huff and J. Bergstra, eds.), 130 – 136.

Evidence week, or why I chatted to politicians about evidence.

Email: a.w.bateson@pgr.reading.ac.uk

Twitter: @a_w_bateson

On a sunny Tuesday morning at 8.30 am I found myself passing through security to enter the Palace of Westminster. The home of the MPs and peers is not obvious territory for a PhD student. However, I was there as a Voice of Young Science (VoYS) volunteer for the Sense about Science Evidence Week. Sense about Science is an independent charity that aims to scrutinise the use of evidence in the public domain and to challenge misleading or misrepresented science. I have written previously here about attending one of their workshops on peer review, and also here about contributing to a campaign assessing the transparency of evidence used in government policy documents.

The purpose of evidence week was to bring together MPs, peers, parliamentary services and people from all walks of life to generate a conversation about why evidence in policy-making matters. The week was held in collaboration with the House of Commons Library, the Parliamentary Office of Science and Technology and the House of Commons Science and Technology Committee, in partnership with SAGE Publishing. Individual events and briefings were contributed to by further organisations including the Royal Statistical Society, the Alliance for Useful Evidence and UCL. Each day had a different theme to focus on, including ‘questioning quality’ and ‘wicked problems’, i.e. superficially simple problems which turn out to be complex and multifaceted.

Throughout the week MPs, parliamentary staff and the public were welcomed to a stand in the Upper Waiting Hall to have conversations about why evidence is important to them. Photo credit to Sense about Science.

Throughout the parliamentary week, which lasts from Monday to Thursday, Sense about Science had a stand in the Upper Waiting Hall of Parliament. This location is right outside the committee rooms where members of the public give evidence to one of the many select committees. These are collections of MPs from multiple parties whose role is to oversee the work of government departments and agencies, though their work in gathering evidence and scrutiny can sometimes have significance beyond UK policy-making (for example, this story documenting one committee’s role in investigating the relationship between Facebook, Cambridge Analytica and the propagation of ‘fake news’). The aim of the stand was to catch the attention of the public, parliamentary staff and MPs, and to engage them in conversations about the importance of evidence. Alongside the stand, a series of events and briefings were held within Parliament on the topic of evidence, with titles including ‘making informed decisions about health care’ and ‘it ain’t necessarily so… simple stories can go wrong’.

Each day brought a new set of VoYS volunteers to the campaign, both to attend to the stand and to document and help out with the various events during the week. Hence I found myself abandoning my own research for a day to contribute to Day 2 of the campaign, which focused on navigating data and statistics. I had a busy day; beyond chatting to people at the stand, I took over the VoYS Twitter account to document some of the day’s key events, attended a briefing about the 2021 census, and provided a video round-up of the day (which can be viewed here!). For the conversations at the stand we were asked to focus particularly on questions in line with the theme of the day, including ‘if a statistic is the answer, what was the question?’ and ‘where does this data come from?’

MP for Bath, Wera Hobhouse, had a particular interest in the pollution data for her constituency and the evidence for the most effective methods to improve air quality.  Photo credit to Sense about Science.

Trying to engage people at the stand proved to be challenging; its location meant that people passing by were often in a rush to committee meetings. Occasionally the division bells announcing a parliamentary vote would ring and a rush of MPs would flock past – great for trying to spot the more well-known MPs, but less good for convincing them to stop and talk about data and statistics. In practice this meant that I and the other VoYS members had to adopt a very assertive approach to talking to people, a style that is generally not within the comfort zone of most scientists! However, this did lead to some very interesting conversations, including with a paediatric surgeon who was advocating to the health select committee for increased investment in research to treat tumours in children. He posed a very interesting question: given a finite amount of funding for tumour research, how much of this should be specifically directed towards improving the survival outcomes of younger patients and how much towards older patients? We also asked MPs and members of the public to add any evidence questions they had to the stand. A member of the public wondered, ‘are there incentives to show what doesn’t work?’ and Layla Moran, MP for Oxford West and Abingdon, asked ‘how can politicians better understand uncertainty in data?’

Visitors to the stand, including MPs and Peers, were asked to add any burning questions they had about evidence to the stand. Photo credit to Sense about Science.

The week proved to be a success. Over 60 MPs from across the parliamentary parties, including government ministers, interacted with some aspect of evidence week, accounting for around 10% of all MPs. A wider audience of parliamentary staff and members of the public also engaged with the stand. Sense about Science highlighted two outcomes after the event: the first was the opening event, where members of various community groups met with over 40 MPs and peers and had the opportunity to explain why evidence was important to them, whether their interest was in beekeeping, safe standing at football matches or IVF treatment; the second was the concluding round-table event on what people require from evidence gathering. SAGE will publish an overview of this round-table as a discussion paper in the autumn.

On a personal level, I had a very valuable experience. Firstly, it was a great opportunity to visit somewhere as imposing and important as the Houses of Parliament and to contribute to such an exciting and innovative week. I was able to have some very interesting conversations with both MPs and members of the public. I found that generally everybody was enthusiastic about the need for greater use and transparency of evidence in policy-making. The challenge, instead, is to ensure that both policy-makers and the general public have the tools they need to collect, assess and apply evidence.

The Role of the Cloud Radiative Effect in the Sensitivity of the Intertropical Convergence Zone to Convective Mixing

Email: j.f.talib@pgr.reading.ac.uk

Talib, J., S.J. Woolnough, N.P. Klingaman, and C.E. Holloway, 2018: The Role of the Cloud Radiative Effect in the Sensitivity of the Intertropical Convergence Zone to Convective Mixing. J. Climate, 31, 6821–6838, https://doi.org/10.1175/JCLI-D-17-0794.1

Rainfall in the tropics is commonly associated with the Intertropical Convergence Zone (ITCZ), a discontinuous band of convergence collocated with the ascending branch of the Hadley circulation, where strong moist convection leads to high rainfall. What controls the location and intensity of the ITCZ remains a fundamental question in climate science.

Figure 1: Annual-mean, zonal-mean tropical precipitation (mm day-1) from the Global Precipitation Climatology Project (GPCP, observations, solid black line) and CMIP5 (current coupled models) output. The dashed line indicates the CMIP5 ensemble mean.

In current and previous generations of climate models, the ITCZ is too intense in the Southern Hemisphere, resulting in two annual-mean, zonal-mean tropical precipitation maxima, one in each hemisphere (Figure 1). Even if we take the same atmospheric models and couple them to a world with only an ocean surface (aquaplanets) with prescribed sea surface temperatures (SSTs), different models simulate different ITCZs (Blackburn et al., 2013).

Within a climate model, parameterisations are used to represent processes that are too small-scale or complex to be resolved explicitly. Parameterisation schemes are used to simulate a variety of processes, including boundary-layer processes, radiative fluxes and atmospheric chemistry. However, my work, along with a plethora of other studies, shows that the representation of the ITCZ is sensitive to the convective parameterisation scheme (Figure 2a). The convective parameterisation scheme simulates the life cycle of clouds within a model grid box.

We show that the simulated ITCZ is sensitive to the convective parameterisation scheme by altering the convective mixing rate in prescribed-SST aquaplanet simulations. The convective mixing rate determines the amount of mixing between a convective parcel and the environmental air: the greater the convective mixing rate, the quicker a convective parcel becomes similar to the environmental air, for fixed convective parcel properties.

Figure 2: Zonal-mean, time-mean (a) precipitation rates (mm day-1) and (b) AEI (W m-2) in simulations where the convective mixing rate is varied.

In our study, the structure of the simulated ITCZ is sensitive to the convective mixing rate. Low convective mixing rates simulate a double ITCZ (two precipitation maxima, orange and red lines in Figure 2a), and high convective mixing rates simulate a single ITCZ (blue and black lines).

We then relate these ITCZ structures to the atmospheric energy input (AEI). The AEI is the amount of energy left in the atmosphere once the top-of-atmosphere and surface energy budgets are taken into account. We conclude, similarly to Bischoff and Schneider (2016), that when the AEI is positive (negative) at the equator, a single (double) ITCZ is simulated (Figure 2b). When the AEI is negative at the equator, energy must be transported towards the equator for equilibrium. From a mean-circulation perspective, this takes place in a double-ITCZ scenario (Figure 3). A positive AEI at the equator is associated with poleward energy transport and a single ITCZ.
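
As a rough guide (the notation and sign conventions here are mine and may differ from the paper), the AEI is the net downward energy flux at the top of the atmosphere (TOA) minus the net downward energy flux at the surface:

    \mathrm{AEI} = \left( S_t^{\downarrow} - S_t^{\uparrow} - L_t^{\uparrow} \right) - \left( S_s^{\downarrow} - S_s^{\uparrow} + L_s^{\downarrow} - L_s^{\uparrow} - \mathrm{SH} - \mathrm{LH} \right)

where S and L are the shortwave and longwave radiative fluxes at the TOA (subscript t) and the surface (subscript s), and SH and LH are the upward sensible and latent heat fluxes. A positive AEI means the atmospheric column gains energy, which the circulation must then export.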

Figure 3: Schematic of a single (left) and double ITCZ (right). Blue arrows denote energy transport. In a single-ITCZ scenario more energy is transported in the upper branches of the Hadley circulation, resulting in a net poleward energy transport. In a double-ITCZ scenario, more energy is transported equatorward than poleward at low latitudes, leading to an equatorward energy transport.

In our paper, we use this association between the AEI and the ITCZ to hypothesise that without the cloud radiative effect (CRE) – the atmospheric heating due to cloud-radiation interactions – a double ITCZ will be simulated. We also hypothesise that prescribing the CRE will reduce the sensitivity of the ITCZ to convective mixing, as the simulated AEI changes are predominantly due to CRE changes.

In the rest of the paper we perform simulations with the CRE removed and prescribed to explore further the role of the CRE in the sensitivity of the ITCZ. We conclude that when the CRE is removed a double ITCZ becomes more favourable, and that in both sets of simulations the ITCZ is less sensitive to convective mixing. The remaining sensitivity is associated with changes in the latent heat flux.

My future work following this publication explores the role of coupling in the sensitivity of the ITCZ to the convective parameterisation scheme. Prescribing the SSTs implies an arbitrary ocean heat transport; however, in the real world the ocean heat transport is sensitive to the atmospheric circulation. Does this relationship between the ocean heat transport and the atmospheric circulation affect the sensitivity of the ITCZ to convective mixing?

Thanks to my funders, SCENARIO NERC DTP, and supervisors for their support for this project.

References:

Blackburn, M. et al., (2013). The Aqua-planet Experiment (APE): Control SST simulation. J. Meteo. Soc. Japan. Ser. II, 91, 17–56.

Bischoff, T. and Schneider, T. (2016). The Equatorial Energy Balance, ITCZ Position, and Double-ITCZ Bifurcations. J. Climate., 29(8), 2997–3013, and Corrigendum, 29(19), 7167–7167.