Advice for students starting their PhD

The Meteorology Department at Reading has just welcomed its new cohort of PhD students, so we gathered some pearls of wisdom for the years ahead:


“Start good habits from the beginning; decide how you will make notes on papers, and how you will save papers, and where you will write notes, and how you will save files. Create a spreadsheet of where your code is, and what it does, and what figure it creates. It will save you so much time.”

“Write down everything you do in note form; this helps you a) track back your progress if you take time out from research and b) make writing your thesis a LOT easier…”

“Pick your supervisor carefully. Don’t kid yourself that they will be different as a PhD supervisor; look for someone understanding and supportive.”

“Expect the work to progress slowly at first; things will not all work out simply.”

“Don’t give up! And don’t be afraid to ask for help from other PhDs or other staff members (in addition to your supervisors).”

“Don’t compare yourself to other PhDs, and make sure to take some time off, you’re allowed a holiday!”

“Ask for help all the time.”

“Keep a diary of the work you do each day so you remember exactly what you’ve done 6 months later.”

“Don’t worry if your supervisors/people in years above seem to know everything, or can do things really easily. There hasn’t been an administrative cock-up, you’re not an impostor: everyone’s been there. Also, get into a good routine. It really helps.”

“Talk to your supervisor about what both of you expect and decide how often you want to meet at the beginning. This will make things easier.”

“Don’t compare with other students. A PhD is an individual project with its own aims and targets. Everyone will get to completion on their own journey.”

“You’ll be lost but achieving something. You can’t see it yet.”

The Many Speak Of Computer

Knowing multiple languages can be hard. As any polyglot will tell you, there are many difficulties that can come from mixing and matching languages: losing vocabulary in both, only being able to think and speak in one at a time, having to remember to apply the correct spelling and pronunciation conventions in the correct contexts.

Humans aren’t the only ones who experience these types of multiple-language issues, however. Computers can also suffer from linguistic problems pertaining to the “programming languages” humans use to communicate with them, as well as the more hidden, arcane languages they use to speak to one another. This can cause untold frustration to their users. Dealing with seemingly arbitrary computing issues while doing science, we humans, especially if we aren’t computing experts, can get stuck in a mire with no conceivable way out.

Caught in the Matrix…

Problems with programming languages are the easiest problems of this nature to solve. Often the mistake is that the human in question lacks the necessary vocabulary, or syntax, and the problem can be solved with a quick perusal of Google or Stack Exchange to find someone with a solution. Humans are much better at communicating and expressing ideas in native human languages than computational ones. They often encounter the same problems as one another and describe them in similar ways. It is not uncommon to overhear a programmer lamenting: “But I know what I mean!” So looking for another human to act as a ‘translator’ can be very effective.

Otherwise, the problem lies with the programming language itself: its syntax is poorly defined, or it doesn’t provide a way to express certain concepts or ideas. Imagine trying to describe the taste of a lemon in a language which doesn’t possess words for ‘bitter’ or ‘sour’. At best these problems can be solved by installing some kind of library, or package, where someone else has written a work-around and you can piggy-back off that effort. Like learning vocabulary from a new dialect. At worst you have to write these things yourself, and if you’re kind, and write good code, you will share them with the community; you’re telling the people down the pub that you’ve decided that the taste of lemons, fizzy colas, and Flanders red is “sour”.

Describe a lemon without using “bitter” or “sour”

There is, however, a more insidious and undermining class of problems, pertaining to the aforementioned arcane computer-only languages. These languages, more aptly called “machine code”, are the incomprehensible languages computers and different parts of a computer use to communicate with one another.

For many programming languages, known as “compiled languages”, the computer must ‘compile’ the code written by a human into machine code, which it then executes, running the program. This is generally a good thing: it helps catch errors before potentially disastrous code is run on a machine, and it significantly improves performance, as the computer doesn’t need to translate the code on the fly, line by line. But there is a catch.

There is no one single machine code. And unless a computer both knows the language an executable is written in and is able to speak it, then tough tomatoes: it can’t run that code.

This is fine for code you have written and compiled yourself, but when importing code from elsewhere it can cause tough-to-diagnose problems, especially on the large computational infrastructures used in scientific computing, with many computers that might not all speak the same languages. In a discipline like meteorology, with a large legacy codebase, and where use of certain libraries is assumed, not knowing how to execute pre-compiled code will leave the hopeful researcher in a rut, especially in cases where access to the source code of a library is restricted because it is a commercial product. You know there’s a dialect that has the words the computer needs to express itself, and you have a set of dictionaries, but you don’t know any of the languages and they’re all completely unintelligible; which dictionary do you use?

All unintelligible?

So what can you do? Attempt to find alternative codebases. Write them yourself. Often, however, we stand on the shoulders of giants, and rewriting everything yourself would be prohibitive. Ask your institution’s computing team for help – but they don’t always know the answers.

There are solutions we can employ in our day-to-day coding practices that can help. Writing clear documentation, as well as maintaining clear style guides, can make a world of difference when attempting to diagnose problems that are machine-related as opposed to code-related. Keeping a repository of functions and procedures for oneself, even if it is not shared with the community, can also be a boon. You can’t see that a language doesn’t have a word for a concept unless you own a dictionary. Sometimes, pulling apart the ‘black box’-like libraries and packages we acquire from the internet, or our supervisors, or other scientists, is important in verifying that code does what we expect it to.

At the end of the day, you are not expected to be an expert in machine architecture. This is one of the many reasons why it is important to be nice to your academic computing team. If you experience issues with compilers not working on your institution’s computers, or executables of libraries not running, it isn’t your job to fix it and you shouldn’t feel bad if it holds your project up. Read some papers, concentrate on some other work, work on your lit-review if you’re committed to continuing to do work. Personally, I took a holiday.

I have struggled with these problems and the solution has been to go to my PhD’s partner institution where we know the code works! Perhaps this is a sign that these problems can be extremely non-trivial, and are not to be underestimated.

Ahh well. It’s better than being a monoglot, at least.

Faster analysis of large datasets in Python

Have you ever run into a memory error or thought your function is taking too long to run? Here are a few tips on how to tackle these issues.

In meteorology we often have to analyse large datasets, which can be time consuming and/or lead to memory errors. While the netCDF4, numpy and pandas packages in Python provide great tools for our data analysis, there are other packages we can use that parallelize our code: joblib, xarray and dask (view links for documentation, and the references below for further reading). This means that the input data is split between the different cores of the computer and our analysis of different bits of data runs in parallel, rather than one after the other, speeding up the process. At the end the data is collected and returned to us in the same form as before, only faster. One of the basic ideas behind the parallelization is the ‘divide and conquer’ algorithm [Fig. 1] (see, e.g., Cormen et al. 2009, or Wikipedia for a brief introduction), which finds the best possible (fastest) route for calculating the data and returning it.

Figure 1: A simple example of the ‘divide and conquer’ algorithm for sorting a list of numbers. First the list is split into simpler subproblems, which are then solved (sorted) and merged into a final sorted array. Source
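
To make the idea concrete, here is a minimal, purely illustrative sketch of the split–solve–merge pattern from Figure 1, written as a simple merge sort in Python (this is not the routine any of these packages actually use, and it is not tuned for performance):

    # Illustrative sketch of the 'divide and conquer' pattern in Figure 1:
    # split the list in half, sort each half, then merge the sorted halves.
    def merge_sort(values):
        if len(values) <= 1:               # a single element is already sorted
            return values
        mid = len(values) // 2
        left = merge_sort(values[:mid])    # divide: solve each half separately
        right = merge_sort(values[mid:])
        merged, i, j = [], 0, 0            # conquer: merge the two sorted halves
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]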

The simplest module we can use is joblib. This module can easily be applied to for-loops (see an example here, and the sketch below): an operation that needs to be executed 1000 times can, for example, be split between the 40 cores that your computer has, making the calculation that much faster. Note that Python modules often include optimized routines that let us avoid for-loops entirely, which is usually a faster option.
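
As a rough sketch of what this can look like (expensive_calculation is just a hypothetical stand-in for whatever your own loop computes, and n_jobs should match the number of cores you actually have):

    # Hypothetical example: parallelising a for-loop with joblib.
    from joblib import Parallel, delayed
    import numpy as np

    def expensive_calculation(i):
        # placeholder for a slow per-iteration computation
        return np.sin(i) ** 2

    # serial version:
    # results = [expensive_calculation(i) for i in range(1000)]

    # parallel version: the 1000 iterations are shared between 40 workers
    results = Parallel(n_jobs=40)(
        delayed(expensive_calculation)(i) for i in range(1000)
    )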

The xarray module provides tools for opening and saving multiple netCDF-type datasets (though it is not limited to these), which can then be analysed either as numpy arrays or dask arrays. If we choose to use the dask arrays (also available via the dask module), any command we use on the array will be calculated in parallel automatically via a type of ‘divide and conquer’ algorithm, as in the sketch below. Note that this on its own does not help us avoid a memory error, as the data eventually has to be loaded into memory (potentially using a for-loop on these xarray/dask arrays can speed up the calculation). There are often also options to run your data on high-memory nodes, and the larger the dataset the more time we save through parallelization.
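
A minimal sketch of this lazy behaviour, assuming a hypothetical file data.nc containing a variable called temperature:

    # Hypothetical example: lazy, parallel computation with xarray + dask.
    import xarray as xr

    ds = xr.open_dataset("data.nc", chunks={"time": 100})   # dask-backed arrays
    clim = ds["temperature"].mean(dim="time")                # nothing is computed yet
    result = clim.compute()                                  # the parallel work happens here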

In the end it really depends on how much time you are willing to spend on learning about these arrays and whether it is worth the extra effort – I had to use them as they resolved my memory issues and sped up the code. It is certainly worth keeping this option in mind!

Getting started with xarray/dask

In the terminal window:

  • Use a system with conda installed (e.g. anaconda)
  • To start a bash shell type: bash
  • Create a new python environment (e.g. ‘my_env’) locally, so you can install custom packages. Give it a list of packages:
    • conda create -n my_env xarray
  • Then activate the new python environment (Make sure that you are in ‘my_env’ when using xarray):
    • source activate my_env
  • If you need any other packages, you can add them later (via conda install), or you could list them alongside xarray when you create the environment:
    • conda install scipy pandas numpy dask matplotlib joblib #etc.
  • If the following paths are not ‘unset’ then you need to unset them (check this with command: conda info -a):
    • unset PYTHONPATH PYTHONHOME LD_LIBRARY_PATH
  • In Python you can then simply import the xarray, numpy or dask modules:
    • import xarray as xr; import dask.array as da; import numpy as np; from joblib import Parallel, delayed; # etc.
  • Now you can easily import datasets [e.g.: dataset = xr.open_dataset() from one file or dataset = xr.open_mfdataset() from multiple files; similarly dataset.to_netcdf() to save to one netcdf file or xr.save_mfdataset() to save to multiple netcdf files] and manipulate them using dask and xarray modules – documentation for these can be found in the links above and references below.
  • Once you open a dataset, you can access data either by loading it into memory (xarray data array: dataset.varname.values) and further analysing it as before using the numpy package (which will not run in parallel); or you can access data through the dask array (xarray dask array: dataset.varname.data), which will not load the data into memory (it will create the best possible path to executing the operation) until you wish to save the data to a file or plot it. The latter can be analysed in a similar way to the well-known numpy arrays, but using the dask module instead [e.g. numpy.mean(array, axis=0) in dask becomes dask.array.mean(dask_array, axis=0)]. Many functions exist in the xarray module as well, meaning you can run them on the dataset itself rather than the array [e.g. dataset.mean(dim='time') is equivalent to the mean in dask or numpy]. A compact example pulling these steps together is sketched after this list.
  • Caution: If you try to do too many operations on the array, the ‘divide and conquer’ algorithm will become so complex that the programme will not be able to manage it. Therefore, it is best to calculate everything step-by-step, by using dask_array.compute() or dask_array.persist(). Another issue I find with these new array modules is that they are slow when it comes to saving the data on disk (i.e. not any faster than other modules).
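
For reference, here is a compact sketch pulling the steps above together; the file pattern and the variable name varname are placeholders for your own data:

    # Hypothetical end-to-end example with xarray, dask and numpy.
    import xarray as xr
    import dask.array as da
    import numpy as np

    dataset = xr.open_mfdataset("model_output_*.nc")    # open many netCDF files as one dataset

    # Option 1: load into memory and use numpy (no parallelism)
    arr = dataset.varname.values
    mean_np = np.mean(arr, axis=0)

    # Option 2: stay lazy and let dask parallelise the calculation
    darr = dataset.varname.data                          # dask array, not yet in memory
    mean_da = da.mean(darr, axis=0).compute()            # evaluate this step explicitly

    # Option 3: work with the dataset itself via xarray
    mean_xr = dataset.varname.mean(dim="time")
    mean_xr.to_netcdf("varname_time_mean.nc")            # computed when written to disk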

I would like to thank Shannon Mason and Peter Gabrovšek for their helpful advice and suggestions.

References

Cormen, T.H., C.E. Leiserson, R.L. Rivest, C. Stein, 2009: An introduction to algorithms. MIT press, third edition, 1312 pp.

Dask Development Team, 2016: Dask: Library for dynamic task scheduling. URL: http://dask.pydata.org

Hoyer, S. & Hamman, J., 2017. xarray: N-D labeled Arrays and Datasets in Python. Journal of Open Research Software. 5, p.10. DOI: http://doi.org/10.5334/jors.148

Hoyer, S., C. Fitzgerald, J. Hamman and others, 2016: xarray: v0.8.0. DOI: http://dx.doi.org/10.5281/zenodo.59499

Rocklin, M., 2016: Dask: Parallel Computation with Blocked algorithms and Task Scheduling. Proceedings of the 14th Python in Science Conference (K. Huff and J. Bergstra, eds.), 130 – 136.

Evidence week, or why I chatted to politicians about evidence.

Email: a.w.bateson@pgr.reading.ac.uk

Twitter: @a_w_bateson

On a sunny Tuesday morning at 8.30 am I found myself passing through security to enter the Palace of Westminster. The home of the MPs and peers is not obvious territory for a PhD student. However, I was here as a Voice of Young Science (VoYS) volunteer for the Sense about Science Evidence Week. Sense about Science is an independent charity that aims to scrutinise the use of evidence in the public domain and to challenge misleading or misrepresented science. I have written previously here about attending one of their workshops about peer review, and also here about contributing to a campaign aiming to assess the transparency of evidence used in government policy documents.

The purpose of Evidence Week was to bring together MPs, peers, parliamentary services and people from all walks of life to generate a conversation about why evidence in policy-making matters. The week was held in collaboration with the House of Commons Library, the Parliamentary Office of Science and Technology and the House of Commons Science and Technology Committee, in partnership with SAGE Publishing. Individual events and briefings were contributed to by further organisations including the Royal Statistical Society, the Alliance for Useful Evidence and UCL. Each day had a different theme to focus on, including ‘questioning quality’ and ‘wicked problems’, i.e. superficially simple problems which turn out to be complex and multifaceted.

Throughout the week, MPs, parliamentary staff and the public were welcomed to a stand in the Upper Waiting Hall to have conversations about why evidence is important to them. Photo credit to Sense about Science.

Throughout the parliamentary week, which lasts from Monday to Thursday, Sense about Science had a stand in the Upper Waiting Hall of Parliament. This location is right outside the committee rooms where members of the public give evidence to one of the many select committees. These are collections of MPs from multiple parties whose role is to oversee the work of government departments and agencies, though their role in gathering evidence and scrutiny can sometimes have significance beyond just UK policy-making (for example this story documenting one committee’s role in investigating the relationship between Facebook, Cambridge Analytica and the propagation of ‘fake news’). The aim of this stand was to catch the attention of the public, parliamentary staff and MPs, and to engage them in conversations about the importance of evidence. Alongside the stand, a series of events and briefings were held within Parliament on the topic of evidence. Titles included ‘making informed decisions about health care’ and ‘it ain’t necessarily so… simple stories can go wrong’.

Each day brought a new set of VoYS volunteers to the campaign, both to attend to the stand and to document and help out with the various events during the week. Hence I found myself abandoning my own research for a day to contribute to Day 2 of the campaign, which focused on navigating data and statistics. I had a busy day; beyond chatting to people at the stand I also took over the VoYS Twitter account to document some of the day’s key events, attended a briefing about the 2021 census, and provided a video roundup for the day (which can be viewed here!). For conversations at the stand we were asked to focus particularly on questions in line with the theme of the day, including ‘if a statistic is the answer, what was the question?’ and ‘where does this data come from?’

MP for Bath, Wera Hobhouse, had a particular interest in the pollution data for her constituency and the evidence for the most effective methods to improve air quality. Photo credit to Sense about Science.

Trying to engage people at the stand proved to be challenging; the location meant people passing by were often in a rush to committee meetings. Occasionally the division bells, announcing a parliamentary vote, would ring and a rush of MPs would flock by – great for trying to spot the more well-known MPs, but less good for convincing them to stop and talk about data and statistics. In practice this meant I and the other VoYS members had to adopt a very assertive approach to talking to people, a style that is generally not within the comfort zone of most scientists! However, this did lead to some very interesting conversations, including with a paediatric surgeon who was advocating to the health select committee for increasing the investment in research to treat tumours in children. He posed a very interesting question: given a finite amount of funding for tumour research, how much of this should be specifically directed towards improving the survival outcomes of younger patients and how much to older patients? We also asked MPs and members of the public to add any evidence questions they had to the stand. A member of the public wondered, ‘are there incentives to show what doesn’t work?’ and Layla Moran, MP for Oxford West and Abingdon, asked ‘how can politicians better understand uncertainty in data?’

Visitors, including MPs and peers, were asked to add any burning questions they had about evidence to the stand. Photo credit to Sense about Science.

The week proved to be a success. Over 60 MPs from across the parliamentary parties, including government ministers, interacted with some aspect of Evidence Week, accounting for around 10% of the total number of MPs. The stand also engaged a wider audience of parliamentary staff and members of the public. Sense about Science highlighted two outcomes after the event: one was the opening event, where members of various community groups met with over 40 MPs and peers and had the opportunity to explain why evidence was important to them, whether their interest was in beekeeping, safe standing at football matches or IVF treatment; the second was the concluding round-table event on what people require from evidence gathering. SAGE will publish an overview of this round-table as a discussion paper in the autumn.

On a personal level, I had a very valuable experience. Firstly, it was a great opportunity to visit somewhere as imposing and important as the Houses of Parliament and to contribute to such an exciting and innovative week. I was able to have some very interesting conversations with both MPs and members of the public. I found that generally everybody was enthusiastic about the need for increased use and transparency of evidence in policy-making. The challenge, instead, is to ensure that both policy-makers and the general public have the tools they need to collect, assess and apply evidence.

The Role of the Cloud Radiative Effect in the Sensitivity of the Intertropical Convergence Zone to Convective Mixing

Email: j.f.talib@pgr.reading.ac.uk

Talib, J., S.J. Woolnough, N.P. Klingaman, and C.E. Holloway, 2018: The Role of the Cloud Radiative Effect in the Sensitivity of the Intertropical Convergence Zone to Convective Mixing. J. Climate, 31, 6821–6838, https://doi.org/10.1175/JCLI-D-17-0794.1

Rainfall in the tropics is commonly associated with the Intertropical Convergence Zone (ITCZ), a discontinuous line of convergence collocated at the ascending branch of the Hadley circulation, where strong moist convection leads to high rainfall. What controls the location and intensity of the ITCZ remains a fundamental question in climate science.

Figure 1: Annual-mean, zonal-mean tropical precipitation (mm day-1) from Global Precipitation Climatology Project (GPCP, observations, solid black line) and CMIP5 (current coupled models) output. Dashed line indicates CMIP5 ensemble mean.

In current and previous generations of climate models, the ITCZ is too intense in the Southern Hemisphere, resulting in two annual-mean, zonal-mean tropical precipitation maxima, one in each hemisphere (Figure 1). Even if we take the same atmospheric models and couple them to a world with only an ocean surface (aquaplanets) with prescribed sea surface temperatures (SSTs), different models simulate different ITCZs (Blackburn et al., 2013).

Within a climate model, parameterisations are used to replace processes that are too small-scale or complex to be physically represented in the model. Parameterisation schemes are used to simulate a variety of processes, including processes within the boundary layer, radiative fluxes and atmospheric chemistry. However, my work, along with a plethora of other studies, shows that the representation of the ITCZ is sensitive to the convective parameterisation scheme (Figure 2a). The convective parameterisation scheme simulates the life cycle of clouds within a model grid-box.

We show that the simulated ITCZ is sensitive to the convective parameterisation scheme by altering the convective mixing rate in prescribed-SST aquaplanet simulations. The convective mixing rate determines the amount of mixing a convective parcel has with the environmental air: the greater the convective mixing rate, the quicker a convective parcel will become similar to the environmental air, given fixed convective parcel properties.

Figure 2: Zonal-mean, time-mean (a) precipitation rates (mm day-1) and (b) AEI (W m-2) in simulations where the convective mixing rate is varied.

In our study, the structure of the simulated ITCZ is sensitive to the convective mixing rate. Low convective mixing rates simulate a double ITCZ (two precipitation maxima, orange and red lines in Figure 2a), and high convective mixing rates simulate a single ITCZ (blue and black lines).

We then associate these ITCZ structures with the atmospheric energy input (AEI). The AEI is the amount of energy left in the atmosphere once the top-of-atmosphere and surface energy budgets are taken into account. We conclude, similarly to Bischoff and Schneider (2016), that when the AEI is positive (negative) at the equator, a single (double) ITCZ is simulated (Figure 2b). When the AEI is negative at the equator, energy needs to be transported towards the equator for equilibrium. From a mean circulation perspective, this takes place in a double ITCZ scenario (Figure 3). A positive AEI at the equator is associated with poleward energy transport and a single ITCZ.

Figure 3: Schematic of a single (left) and double ITCZ (right). Blue arrows denote energy transport. In a single ITCZ scenario more energy is transported in the upper branches of the Hadley circulation, resulting in a net poleward energy transport. In a double ITCZ scenario, more energy is transported equatorward than poleward at low latitudes, leading to a net equatorward energy transport.

In our paper, we use this association between the AEI and the ITCZ to hypothesize that without the cloud radiative effect (CRE) – the atmospheric heating due to cloud-radiation interactions – a double ITCZ will be simulated. We also hypothesize that prescribing the CRE will reduce the sensitivity of the ITCZ to convective mixing, as the simulated AEI changes are predominantly due to CRE changes.

In the rest of the paper we perform simulations with the CRE removed and prescribed to explore further the role of the CRE in the sensitivity of the ITCZ. We conclude that removing the CRE makes a double ITCZ more favourable, and that in both sets of simulations the ITCZ is less sensitive to convective mixing. The remaining sensitivity is associated with changes in the latent heat flux.

My future work following this publication explores the role of coupling in the sensitivity of the ITCZ to the convective parameterisation scheme. Prescribing the SSTs implies an arbitrary ocean heat transport; in the real world, however, the ocean heat transport is sensitive to the atmospheric circulation. Does this interplay between the ocean heat transport and the atmospheric circulation affect the sensitivity of the ITCZ to convective mixing?

Thanks to my funders, SCENARIO NERC DTP, and supervisors for their support for this project.

References:

Blackburn, M. et al., (2013). The Aqua-planet Experiment (APE): Control SST simulation. J. Meteo. Soc. Japan. Ser. II, 91, 17–56.

Bischoff, T. and Schneider, T. (2016). The Equatorial Energy Balance, ITCZ Position, and Double-ITCZ Bifurcations. J. Climate., 29(8), 2997–3013, and Corrigendum, 29(19), 7167–7167.

 

It’s a #GlobalHeatwave

Email: s.h.lee@pgr.reading.ac.uk 

Sometimes a simple tweet on a Sunday evening can go a long way.

This summer’s persistent dry and warm weather in the UK has led to many comparisons to the summer of 1976, which saw a lethal combination of the warmest June–August mean maximum temperatures (per the Met Office record stretching back to 1910) and a record-breaking lack of rainfall (a measly 104.6 mm – since bested by 1995’s 103.0 mm – compared with the record-wettest 384.4 mm in 1912). Combined with a hot summer the year before and a dry winter, this produced historic water shortages, and the summer has become a benchmark to which all UK heatwaves are compared. So far, 2018 has set a new record for the driest first half of summer for the UK (a record stretching back to 1961), but it remains to be seen whether it will truly rival ’76.

All these comparisons made me wonder: what did global temperatures look like during the heatwave of 1976? Headlines have been filled with news of other heatwaves across the Northern Hemisphere, including in Africa, Finland and Japan. Was the UK heatwave in 1976 also part of a generally warm pattern?

So I had a look at the data using the plotting tool available on NASA’s Goddard Institute for Space Studies (GISS) site, and composed a relatively simple tweet which took off in a manner only fitting for a planet undergoing rapid warming:

At the time of writing, it’s been retweeted over 8,800 times in under 48 hours and featured as part of a Twitter Moment. Even Héctor Bellerín, a footballer for Arsenal, retweeted it!

Once the tweet had taken on a life of its own, I was also well aware of so-called “climate change deniers” (I don’t like the term, but it’s the best I can do) lurking out there, and I was somewhat apprehensive of what might get said. I’ve seen Paul Williams have many not-so-pleasant Twitter encounters on the subject of climate change. However, I was actually quite surprised. Aside from a few comments here and there from ‘deniers’ (usually focusing on fundamental misunderstandings of averaging periods and the interpolation used by NASA to deal with areas of low data coverage), the response was generally positive. People were shocked, frightened, moved…and thankful to have perhaps finally grasped what global warming meant.

I endeavoured to keep it cordial and scientific, as the issue is too big to make enemies over – we all need to work together to tackle the problem.

So, maybe now I have some idea how Ed Hawkins felt when his global warming spiral went viral and eventually ended up in the 2016 Olympics opening ceremony. I guess the biggest realisation for me is that, as a scientist, I’m familiar with graphics such as these showing the extent of global warming, but the wider public clearly aren’t – and that’s part of the reason I believe the tweet became so popular.

I can’t say that the 2018 UK heatwave is due to global warming. However, with unusually high temperatures present across the globe, it takes less significant weather patterns to produce significant heatwaves in the UK (and elsewhere). And with the jet streams that guide our weather systems already feeling the effects of climate change (something which I researched as an undergraduate), we can only expect more extremes in the future.

Royal Meteorological Society Conferences

From 3rd-6th July 2018 the Royal Meteorological Society (RMetS) held two national conferences at the University of York. The Atmospheric Science Conference, joint with NCAS, started off the week and brought together scientists to present and discuss the latest research findings in weather, climate and atmospheric chemistry. The following two days brought the RMetS Student Conference. Both events were well attended by PhD students from Reading and provided a great opportunity to share our work with the wider scientific community.

For a summary of the work presented by Reading students, stick around until the end of the blog!

Atmospheric Science Conference 2018

Weather, Climate and Air Quality

Many of the presentations focused on seasonal forecasting with Adam Scaife (Met Office) giving a keynote address on “Skilful Long Range Forecasts for Europe”. He presented an interesting analysis on the current progress of predicting the North Atlantic Oscillation showing that there is skill in current predictions which could be improved even further by increasing ensemble size. Adam was also awarded the prestigious Copernicus Medal at the conference dinner. Another notable talk was by Reading’s own Ed Hawkins, who presented the benefits of using citizen scientists to rescue weather records. A summary of Ed’s presentation can be accessed below, and you can read more about research involving Citizen Science in Shannon Jones’ blog.

The poster sessions at the conference also gave a great opportunity to look at the breadth of work going on in institutions around the UK. It was also a great time to catch up with colleagues and forge new academic connections.

One of the highlights of the conference was having the conference dinner in the National Railway Museum. This was a fantastic yet surreal location, with dining tables set up in the station hall overlooking a suite of old steam trains. The event was made even better by watching England's quarter-final World Cup game!


Evolution of Science: Past, Present and Future

Students & Early Career Scientist Conference

The student conference is open to all students with an interest in meteorology, from undergraduate to PhD and early career scientists. The conference aimed to give students the opportunity to meet each other and present their work at an early stage in their career before attending other academic conferences. For many of those attending from Reading this was their first time presenting research at an event outside of the department and provided a great experience to communicate their work with others. Work presented varied from radiative forcing to normal empirical modes (summaries of talks are below). There were also a number of keynote speakers and workshops aimed at addressing the current challenges in atmospheric sciences and skills that are important for researchers.

Rory Fitzpatrick, presenting on skills for writing as an academic: “I have the Best Words” – How to write articles that impact bigly

Of course there was also time for socialising, with an ice-breaker dinner and pub quiz, and a formal conference dinner on the Thursday. This was the second student conference I have attended and it was a really great place to discuss my work and meet other students from around the country. I have also attended other academic events with several people that I met at the conference last year; it’s always great to see a friendly face!

The student conference is organised by a committee of students from around the UK. Being on the committee was a great opportunity to learn more about how conferences work and to practise skills such as chairing sessions. It has also been great to get to know lots of different people working within meteorology. If you’re interested in helping organise next year’s conference, please do get in touch with Victoria Dickinson at RMetS (Victoria.Dickinson@rmets.org), or if you’re thinking about attending, you can start by joining the society, where you’ll hear about all the other great events they host.

Highlights of the work presented by Reading students:

Godwin Ayesiga presented work on the convective activity that connects Western and Eastern equatorial Africa, investigating how intraseasonal modes of variability influence intense rainfall.

Matt Priestley presented an assessment of the importance of windstorm clustering on European wintertime insurance losses. More details of this work can be found here.

Lewis Blunn presented his work looking into the ‘grey zone’ of turbulence at model grid scale lengths of 100 m – 1 km. At these scales turbulence is partially resolved by the grid but still needs to be partially parameterised. Lewis finds that spurious grid scale features emerge at scales where turbulence is partially resolved. Model results are poorer in this ‘grey zone’ than when turbulence is fully resolved or fully parameterised.

Alec Vessey presented his work evaluating the representation of Arctic storms in different reanalysis products. He found that there are differences between reanalysis products, and so care should be taken when using them to analyse Arctic storms.

Dominic Jones presented a technique for extracting modes of variability from atmospheric data, along with a test dataset developed to use this technique to examine modes of variability associated with the jet latitude.

Rachael Byrom presented the motivation for quantifying methane’s shortwave radiative forcing. Her work demonstrated the need to use a high-resolution, narrow-band radiation model to accurately calculate forcings in atmospheric models.

Andrea Marcheggiani presented a poster on the role of resolution in predicting the North Atlantic storm track. An energy budget of the winter climatology (DJF 1979-2018) was presented.

Sally Woodhouse presented her work on the impact of resolution on energy transports into the Arctic. She has found that increasing atmospheric resolution increases the energy transport in the ocean, bringing it into better agreement with observations.

Kaja Milczewska presented work on evaluating the inaccuracies of predicting air quality in the UK.

Having recently passed her viva, Caroline Dunning presented on precipitation seasonality over Africa under present and future climates. Caroline has developed a new methodology for determining the beginning and end of the wet season across Africa. This has been applied to CMIP5 model output to look at future changes in wet seasons across Africa under climate change.