Downward trends in Arctic sea-ice extent in recent decades are a striking signal of our warming planet. Loss of sea ice has major implications for future climate because it strongly influences the Earth’s energy budget and plays a dynamic role in atmospheric and ocean circulation.
Comprehensive numerical models are used to make long-term projections of the future climate state under different greenhouse-gas emission scenarios. They estimate that the Arctic Ocean will become seasonally ice free by the end of the 21st century, but there is large uncertainty in the timing due to the spread of estimates across models (Fig. 1).
What causes this spread, and how might it be reduced to better constrain future projections? There are various factors (Notz et al. 2016), but of interest to our work is the large-scale forcing of the atmosphere and ocean. The mean atmospheric circulation transports about 3 PW of heat from lower latitudes into the Arctic, and the ocean transports about a tenth of that (e.g. Trenberth and Fasullo, 2017; 1 PW = 10¹⁵ W). Our goal is to understand the relative roles of ocean and atmospheric heat transports (OHT and AHT) on long timescales. Specifically, how sensitive is the sea-ice cover to deviations in OHT and AHT, and what underlying mechanisms determine those sensitivities?
We developed a highly simplified Energy-Balance Model (EBM) of the climate system (Fig. 2)—it has only latitudinal variations and is described by a few simple equations relating energy transfer between the atmosphere, ocean, and sea ice (Aylmer et al. 2020). The latitude of the sea-ice edge is an analogue for ice extent in the real world. The simplicity of the EBM allows us to isolate the basic physics of the problem, which would not be possible working directly with the complex output of a full climate model.
We generated a set of simulations in which OHT varies and examined the response of the ice edge. This gives a measure of the effective sensitivity of the ice cover to OHT (Fig. 3a). It is not the actual sensitivity, because AHT decreases at the same time (Fig. 3b): what Fig. 3a really shows is the net response of the ice edge to changes in both OHT and AHT.
This reduction in AHT with increasing OHT is called Bjerknes compensation, and it occurs in full climate models too (Outten et al. 2018). Here, it has a moderating effect on the true impact of increasing OHT. With further analysis, we determined the actual sensitivity to be about 1.5 times the effective sensitivity. The actual sensitivity of the ice edge to AHT turns out to be about half that to the OHT.
What sets the difference between the OHT and AHT sensitivities? This is easily answered within the EBM framework. We derived a general expression for the ratio of the (actual) ice-edge sensitivities to OHT (s_o) and AHT (s_a):
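Neglecting a higher-order term, the expression takes approximately the following form (a reconstruction consistent with the discussion below; see Aylmer et al. 2020 for the full derivation):

```latex
\frac{s_o}{s_a} \approx 1 + \frac{B_{\mathrm{OLR}}}{B_{\mathrm{down}}}
```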
A higher-order term has been neglected for simplicity here, but the basic point remains: the ratio of sensitivities mainly depends on the parameters B_OLR and B_down. These are bulk representations of atmospheric feedbacks and determine the efficiency of outgoing and downwelling longwave radiation, respectively. They are always positive, so the ice edge is always more sensitive to OHT than to AHT.
The interpretation of this equation is simple. AHT converging over the ice pack can either be transferred to the underlying sea ice or radiated to space (having no impact on the ice), and the partitioning is controlled by B_down and B_OLR. The same amount of OHT converging under the ice pack can only go through the ice, and is thus the more efficient driver.
Climate models with larger OHTs tend to have less sea ice (Mahlstein and Knutti, 2011). We have also found strong correlations between OHT and the sea-ice edge in several of the models listed in Fig. 1 individually. Ice-edge sensitivities and B values can be determined per model, and our equation predicts how these should be related. Our work thus provides a way to investigate how much physical biases in OHT and AHT contribute to sea-ice-projection uncertainties.
Methane (CH4) is a potent greenhouse gas. Its ability to alter fluxes of longwave (thermal-infrared) radiation emitted and absorbed by the Earth’s surface and atmosphere has been well studied, and as a result methane’s thermal-infrared impact on the climate system has been quantified in detail. According to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5), methane has the second-largest radiative forcing (0.48 W m⁻²) of the well-mixed greenhouse gases after carbon dioxide (CO2) (Myhre et al. 2013; see Figure 1). This means that, due to its change in atmospheric concentration since the pre-industrial era (1750–2011), methane has directly perturbed the net (incoming minus outgoing) stratospheric-temperature-adjusted radiative flux at the tropopause by 0.48 W m⁻², causing the climate system to warm.
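Schematically, this stratospheric-temperature-adjusted forcing is the change in net downward flux at the tropopause between the perturbed (adjusted) and pre-industrial states (the notation here is illustrative, not from the original papers):

```latex
\mathrm{RF} \;=\; \left( F^{\downarrow} - F^{\uparrow} \right)^{\text{2011, adjusted}}_{\text{tropopause}}
          \;-\; \left( F^{\downarrow} - F^{\uparrow} \right)^{\text{1750}}_{\text{tropopause}}
```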
However, an important effect is missing from the current IPCC AR5 estimate of methane’s climate impact – its absorption of solar radiation. In addition to longwave radiation, methane also alters fluxes of incoming solar shortwave radiation at wavelengths between 0.7 – 10 µm.
Until recently this shortwave effect had not been thoroughly quantified and as such was largely overlooked. My PhD work focuses on quantifying methane’s shortwave effect in detail and aims to build upon the significant, initial findings of Etminan et al. (2016) and a more recent study by Collins et al. (2018).
Etminan et al. (2016) analysed methane’s absorption of solar near-infrared radiation (at wavelengths between 0.2 – 5 µm) and found that it exerts a direct, instantaneous, positive forcing on the climate system, largely due to the absorption of near-infrared radiation scattered and reflected upwards by clouds in the troposphere. Essentially, this process results in photons taking multiple passes through the troposphere, which in turn increases absorption by CH4. Figure 2 shows the net (downwards minus upwards) spectral variation of this forcing at the tropopause under all-sky (i.e. cloudy) conditions. Methane’s three key absorption bands across the near-infrared region, at 1.7, 2.3 and 3.3 µm, are clearly visible.
As Etminan et al. (2016) explain, following a perturbation in methane concentrations all of these bands decrease the downwards shortwave radiative flux at the tropopause, due to increased absorption in the stratosphere. However, the net sign of the forcing depends on whether this negative contribution outweighs the increased absorption by these bands in the troposphere (which constitutes a positive forcing). As Figure 2 shows, whilst the 3.3 µm band has a strongly net negative forcing due to the absorption of downwelling solar radiation in the stratosphere, both the 1.7 µm and 2.3 µm bands have a net positive forcing due to increased CH4 absorption in an all-sky troposphere. When summed across the entire spectral range, the positive forcing at 1.7 µm and 2.3 µm dominates over the negative forcing at 3.3 µm, resulting in a net positive forcing. Etminan et al. (2016) also found that this positive forcing is partly explained by methane’s spectral overlap with water vapour (H2O). The 3.3 µm band overlaps with a region of relatively strong H2O absorption, which reduces its ability to absorb shortwave radiation in the troposphere, where concentrations of H2O are high. However, the 1.7 µm and 2.3 µm bands overlap much less with H2O, and so are able to absorb more shortwave radiation in the troposphere.
In addition, Etminan et al. (2016) found that the shortwave effect alters methane’s stratospheric-temperature-adjusted longwave radiative forcing (the process whereby stratospheric temperatures readjust to radiative equilibrium before the change in net radiative flux is calculated at the tropopause; Myhre et al. 2013). Absorption of solar radiation warms the stratosphere, and hence increases the emission of longwave radiation by methane downwards to the troposphere. This results in a positive tropopause longwave radiative forcing. Combining the direct, instantaneous shortwave forcing with its impact on the stratospheric-temperature-adjusted longwave forcing, Etminan et al. (2016) found that the inclusion of the shortwave effect enhances methane’s radiative forcing by a total of 15%. These results are significant and indicate the importance of methane’s shortwave absorption. However, Etminan et al. (2016) note several areas of uncertainty surrounding their estimate and highlight the need for a more detailed analysis of the subject.
My work aims to address these uncertainties by investigating several factors: updates to the HITRAN spectroscopic database (which provides key spectroscopic parameters for climate models to simulate the transmission of radiation through the atmosphere); the inclusion of the solar mid-infrared (7.7 µm) band in calculations of the shortwave effect; and potential sensitivities, such as the vertical representation of CH4 concentrations throughout the atmosphere and the specification of land surface albedo. My work also extends Etminan et al. (2016) by investigating the shortwave effect at a global spatial resolution, since that study employed a two-atmosphere approach (using tropical and extra-tropical profiles). To do this I use the model SOCRATES-RF (Checa-Garcia et al. 2018), which computes monthly-mean radiative forcings at a global 5° x 5° spatial resolution using a high-resolution 260-band shortwave spectrum (from 0.2 – 10 µm) and a standard 9-band longwave spectrum.
Initial results from SOCRATES-RF confirm that methane’s all-sky tropopause shortwave radiative forcing is positive and that the inclusion of shortwave bands increases the stratospheric-temperature-adjusted longwave radiative forcing. In total, my calculations estimate that the shortwave effect increases methane’s total radiative forcing by 10%. Whilst this estimate is lower than the 15% proposed by Etminan et al. (2016), it is important to note that it is not a final figure: investigations into several key forcing sensitivities are currently underway. For example, methane’s shortwave forcing is highly sensitive to the vertical representation of concentrations throughout the atmosphere. Tests conducted using SOCRATES-RF reveal that when vertically-varying profiles of CH4 concentrations are perturbed, the shortwave forcing almost doubles in magnitude (from 0.014 W m⁻² to 0.027 W m⁻²) compared to the same calculation conducted using constant vertical profiles of CH4 concentrations. Since observational studies show that concentrations of methane decrease with height above the tropopause (e.g. Koo et al. 2017), the use of realistic vertically-varying profiles in forcing calculations is essential. The vertically-varying CH4 profiles currently employed in SOCRATES-RF are highly idealised, varying with latitude but not with season, so the realism of this representation needs to be validated against observational datasets and possibly updated accordingly.
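As a back-of-the-envelope check, the figures quoted above can be combined with simple arithmetic (all numbers come from the text; this is not SOCRATES-RF output, and the AR5 value of 0.48 W m⁻² is used only for scale):

```python
# Back-of-the-envelope check of the figures quoted above.
# These values are taken from the text; this is not model output.

sw_constant = 0.014  # W m^-2: shortwave forcing, constant vertical CH4 profile
sw_varying = 0.027   # W m^-2: shortwave forcing, vertically-varying CH4 profile

ratio = sw_varying / sw_constant
print(f"Varying vs constant profile: x{ratio:.2f}")  # ~1.9, i.e. "almost doubles"

# A ~10% shortwave enhancement applied to the AR5 longwave-only
# forcing, purely to illustrate the magnitudes involved.
f_lw = 0.48          # W m^-2 (AR5 methane forcing, for scale only)
f_total = f_lw * 1.10
print(f"Illustrative total forcing with shortwave effect: {f_total:.3f} W m^-2")
```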
Another key sensitivity currently under investigation is the specification of land surface albedo – a potentially important factor controlling the amount of reflected shortwave radiation absorbed by methane. Since the radiative properties of surface features are highly wavelength-dependent, it is plausible that realistic, spectrally-varying land surface albedo values will be required to accurately simulate methane’s shortwave forcing. For example, vegetation and soils typically tend to reflect much more strongly in the near-infrared than in the visible region of the solar spectrum, whilst snow surfaces reflect much more strongly in the visible (see Roesch et al. 2002). Currently in SOCRATES-RF, globally-varying, spectrally-constant land-surface albedo values are used, derived from ERA-Interim reanalysis data.
Figure 3 compares the spatial distribution of methane’s annual-mean all-sky tropopause shortwave forcing as calculated by SOCRATES-RF and by Collins et al. (2018). Both calculations exhibit the same regions of maxima, for example over the Sahara, the Arabian Peninsula, and the Tibetan Plateau. However, the poleward amplification shown by SOCRATES-RF is not evident in Collins et al. (2018). The current leading hypothesis for this difference is that the land surface albedo is specified differently in each calculation: Collins et al. (2018) employ spectrally-varying surface albedo values derived from satellite observations, which are arguably more realistic than the spectrally-constant values currently specified in SOCRATES-RF. The next step in my PhD is to further explore the interdependence between methane’s shortwave forcing and land-surface albedo, and to work towards implementing spectrally-varying albedo values in SOCRATES-RF calculations. Along with the ongoing investigation into the vertical representation of CH4 concentrations, I aim to deliver a more definitive estimate of methane’s shortwave effect.
Checa-Garcia, R., Hegglin, M. I., Kinnison, D., Plummer, D. A., and Shine, K. P. 2018: Historical tropospheric and stratospheric ozone radiative forcing using the CMIP6 database. Geophys. Res. Lett., 45, 3264–3273, https://doi.org/10.1002/2017GL076770
Etminan, M., G. Myhre, E. Highwood, and K. P. Shine, 2016: Radiative forcing of carbon dioxide, methane and nitrous oxide: a significant revision of the methane radiative forcing. Geophys. Res. Lett., 43, https://doi.org/10.1002/2016GL071930
IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, 1535 pp
Myhre, G., et al. 2013: Anthropogenic and natural radiative forcing, in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, edited by T. F. Stocker et al., pp. 659–740, Cambridge Univ. Press, Cambridge, U. K., and New York.
Roesch, A., M. Wild, R. Pinker, and A. Ohmura, 2002: Comparison of spectral surface albedos and their impact on the general circulation model simulated surface climate, J. Geophys. Res., 107, 13-1 – 13-18, https://doi.org/10.1029/2001JD000809
A challenge most researchers will be familiar with is how to explain your research to friends and family in a way that’s readily understandable to non-experts. The webcomic xkcd once tried describing the Saturn V moon rocket (‘Up Goer Five’) using only the ten-hundred most commonly used words.
Feeling that we needed something a bit different during the Covid-19 lockdown, and inspired by our predecessors in a previous blog post, we decided to have a go at explaining our own research using the xkcd ‘simplewriter’ tool in our weekly PhD group meeting. While restricting yourself to only the ten-hundred most commonly used words seemed to make things more confusing at times, it certainly got us thinking about which complicated words don’t need to be used… and resulted in some amusing explanations!
Devon Francis – Advanced methods for assimilating satellite data in numerical weather prediction
When we find out how hot or cold it is outside, sometimes it is not right as sometimes it can be too hot and sometimes it can be too cold. If over lots of time it is more often hot than it is cold, or the other way around, then we have to move how much warmer we have found it to be so that we can write down how warm it actually is. We can also imagine how hot or cold it will be by thinking about how hot or cold it has been before, or how hot or cold it is on a day that is like today. But if it was hotter before but we did not notice, then we may think it is also hotter today, so we may need to change what we thought about before as well as what we think about today. Sometimes it is hard to work out why we are wrong about how warm it is today. It could be because we can not decide how warm it is or it could be because it has been too hot or cold in the past. My job is to decide why we think it is too hot or cold, so that tomorrow we can know how warm it will be!
I look at the warming of the sky, and see if that is different from what the sky was like before it was warming. The cool thing is, that we only look at those moments where the sky was attacking us. With us we are talking about humans on the ground. Any form of attack such as warm, cold, strong wind and rain is a thing. I do this by telling a story. The story helps in finding what made the attack happen, if it was the warming of the sky or maybe something else.
Max Coleman – Climate response to short-lived pollutants
In the air are very small things that can hurt us if we breathe them in. However, these things can also change how warm the Earth is by catching light or sending it back to space, and also change how much it rains by changing how much cloud there is in the sky. If humans make more of these things then we will change how warm the Earth is. I study how these small things change how much cloud there is and other things, which in turn change how much light is caught or sent back to space. I do this using a computer which can pretend to be the Earth and works out how much light is caught or sent back to space by changing these things.
Mark Prosser – Using aviation meteorology to improve aircraft safety
My finish-big-school work piece is about crazy wind and sky high-lighting causing problems for flying buses. Crazy wind is already a problem for flying buses which we think will get worse with the Earth hotter than it is now (it is already hotter than before). Didn’t-see-it-coming crazy wind is especially a problem for flying buses because you can’t see into the future. So we brain people use things that big brain animals use on computer brain best guesses, but these aren’t perfect. My work piece is saying good or bad about these things by putting them on real flying buses meeting didn’t-see-coming crazy wind (things really go wrong when these two meet). Day to day I pull down guesses from big computer brain and use not friend long toothy animal to eat these guesses. Right now I am writing up a paper using over leaf and hope it will be put on many bits of paper and seen by some other brain people and liked and talked about.
Jake Bland – The control of cloud and moisture on extratropical cyclone evolution
People use computers to tell them if there will be rain, sun, clouds, or storms in the future, and if it will be hot or cold. To be right about these things the computers need to know about how hot the sky is and how much water is in it, among other things. We get this information by looking with machines on earth, putting machines into the sky, or putting machines in space and telling them to look down at the sky.
You can think of the sky as being in layers, where we are in the lowest layer, and that is where most of our clouds are too. The layer above that is drier, changes less and changes more slowly, but the water in there is still important for being right about the future. It is harder to look at the water in the higher layer than it is to look at how hot it is, or the water in the lower layer, so computers often think there is too much water there. They also think it will be colder up there in the future than it really will be.
I have worked out how wrong computers are about how wet it is and how cold it will be using guesses made four years ago, and machines that were thrown into the sky to look at the same times that were guessed about. (I’m now finishing off writing about this so other people can know about it too!) I have also looked at how the computer being wrong about water and wrong about how cold it is in the future are tied together.
Now I have made a lot of future guesses about those days four years ago, some where the computers have the wrong information like they did before, and some where they have the better information. I am also getting the computer to tell me how it is making the guesses, so I can try to find out what it is wrong with the computer to make it think the higher sky layer is too wet. Using these two sets of guesses I can also find out exactly how important getting how wet it is right for telling the future well.
Linda Toca – Analysis of peatland carbon dynamics using combined optical and microwave satellite data
I study very important wet places and the tiny parts they breath in and out. Most of the time I use space pictures and computer to come up with something to show, but later I plan to go outside to the wet places and see in person how much tiny parts they breath out, and check if it is the same as computer says. I also want to use small flying thing that can make more pictures of wet places from above to see if they can help computer work better. It is very important as the wet places hold huge numbers of tiny parts and we need to know what could happen in the further times with air getting hotter and dry times longer. Do the wet places give much more tiny parts with hotter air and is it enough with space pictures to tell how much, that is what we want to find out.
Wilfred Calder-Potts – Quantifying the impact of increased atmospheric CO2 and climate change on photosynthesis using Solar Induced Fluorescence
When trees accept light they make food. They also make their own light. If you see this light you can guess how much food they are making. Some people have built machines to see this light from space. But we are not sure exactly how much food they are making. I am trying to understand exactly how much food is made, using the light. I am doing this by seeing this light from trees which are hot or cold or have more or less bits of food.
While some may consider the philosophy of science a complicating distraction, I think I ignore it at my peril. Certainly climate science is not without its philosophical issues; one might even say it is riddled with them…
David Stainforth (LSE), the keynote speaker, stated it thus: “The study of anthropogenic climate change presents a range of fundamental challenges for scientific and wider academic inquiry. The essential nature of these challenges are often not well appreciated.”
So how does climate science compare with other natural sciences? Opinions abounded, but here are just some I can recall:

1. We can’t really conduct controlled experiments in the way that other natural scientists can, as we have just the one Earth and can’t turn back time (we have to beware of post-hoc explanations, and some of our predictions may never be verifiable/falsifiable).
2. We therefore rely heavily on numerical models.
3. We’re also doing our science while the climate is changing around us, and thus there is a strong sense of urgency.
4. There is therefore pressure to be multidisciplinary.
On a more practical side, David’s talk left me with a novel way of thinking about ‘climate’. For a climate metric such as temperature, I would hitherto have thought simply of a mean and a standard deviation (a very Gaussian way of looking at it!). But David argued that climate is often best conceived of as a more generalised distribution. While a bell curve is symmetric and unimodal, a climate distribution need not be. Studying and predicting a stable climate distribution may already be difficult, but studying and predicting a changing one is even harder!
Now for a bit of a whirlwind tour of other arguments and points. There was Reading’s own Ted Shepherd arguing that in climate science we often focus too much on avoiding false positives (type I errors) at the expense of incurring false negatives (type II errors). In other words, we get reliability at the price of informativeness, especially at the regional level, where policymakers are eager to be informed.
Then there was Geoff Vallis (University of Exeter) who posed the question “If models were perfect, would we care how they worked?”. Perhaps a pertinent question, as there appears to be a trade-off, an inverse correlation between complexity of models and our ability to understand them. If the models became so complex that they were beyond the abilities of any human past or future to comprehend, what would we do then? If they become as complicated as the Earth system itself, surely we would have long since lost any grasp on them? Indeed, models already appear to be predicting phenomena without us understanding why. Complexity is not necessarily accuracy (How do we assess accuracy in climate science?) and Erica Thompson (LSE) highlighted the importance of ‘getting out of model land’, and staying with the real world, something some of us may need occasional reminding of.
What even are models? Two expressions given were ‘book-keeping devices’ (Wendy Parker) and ‘prostheses of your brain’ (Erica Thompson). No doubt there were others.
Marina Baldissera Pacchetti (University of Leeds) talked about her work on climate information for adaptation that gives us: “guidelines on when quantitative statements about future climate are warranted and potentially decision-relevant, when these statements would be more valuable taking other forms (for example, qualitative statements), and when statements about future climate are not warranted at all.”
In the afternoon, there were breakout ‘lightning’ discussions. We could choose to join 1 of the following 8 groups:
1. Should we aim to estimate the mean/expectation behaviour of the climate, or focus on the worst case?
2. Is the way we go about climate science now the only way of doing it?
3. If our computers were infinitely fast, what science would we do with them?
4. If our models were infinitely good, what science would be left to do?
5. What fact, if only we knew it, would have the biggest impact on climate change?
6. How should climate science approach the question of geoengineering?
7. What is the benefit to society of general circulation models?
8. What does the public need to know, and are we working enough on these questions?
My group was 3, but we ended up accidentally merging with 4 and made for a very interesting and varied discussion!
Which group would you have been most pulled towards had you been there? What philosophical thoughts on climate science have you had? What do you think is the most under-appreciated? I would be interested to hear your thoughts.
Many thanks to the event organiser Goodwin Gibbins (Imperial) and all involved for a thoroughly enjoyable and stimulating day.
If anyone would like to get more into the philosophy of science, I would recommend this thoroughly engaging 10-hour course of lectures by the University of Toronto on YouTube, the trailer of which can be viewed here.
The global COVID-19 lockdown is undoubtedly resulting in curiously low levels of air pollution. Although it might seem inappropriate to seek a silver lining during a global pandemic, the fact that the air really does seem cleaner gives my PhD topic a little more everyday credibility, which – at least for me – is quite nice.
You may have already seen satellite pictures of the effects of the lockdown in northern Italy (and other major cities) on surface nitrogen dioxide (NO2) concentrations. You may have been able to breathe some cleaner air where you live these past few weeks. It feels like we are in the middle of some sort of major air quality experiment, some kind of simulation conducted by a clueless PhD student…
What you might not notice, however, is the rise in near-surface ozone (O3) during a string of warm, sunny days. While it isn’t a primary pollutant like NO2 or particulate matter, O3 is closely associated with the amount of NOx (NO + NO2) in the air. It is also invisible to the naked eye – unless it forms photochemical smog. In short bursts of elevated concentrations, O3 can be harmful to people who already suffer from asthma and other respiratory problems, which could prove problematic since COVID-19 is itself a respiratory disease.
Weather conditions in Reading throughout most of April were favourable for O3 production: lots of solar radiation and weak winds. In fact, Reading experienced its sunniest April on record, along with some of the warmest April days on record. It is therefore not surprising that peak daytime concentrations of O3 crept up over a week of warm, calm weather. For example, a measuring site located between two busy roads in Reading gives a clear indication of what favourable conditions alongside low NOx emissions can do: Figure 1 shows that peak daytime concentrations rose every day between 02/04 and 12/04, when the air was stagnant and O3 tended to accumulate within the atmospheric boundary layer. The peak concentrations between 08/04 and 12/04 are typical of “moderate” levels on DEFRA’s Daily Air Quality Index (DAQI), and are close to exceeding the WHO guidelines for safe concentrations.
The DAQI is dictated by the highest concentration of any one of the five pollutants deemed harmful to human health: ozone, nitrogen dioxide (NO2), sulphur dioxide (SO2), and PM2.5 / PM10 (particulate matter). Figure 2 shows a moderate DAQI in parts of south and north-east England (and in fact, the map looked a lot more yellow and orange on Friday 04/04).
So why might ozone rise while other pollutants fall? The answer is probably that ozone has a “love-hate” relationship with NOx, Volatile Organic Compounds (VOCs) and the weather. Ozone skyrockets when it’s sunny and skies are blue. High-pressure systems and calm weather trap much of the existing ozone within the boundary layer, near the ground. In particular, if easterly or southerly winds prevail, they can transport both ozone and its precursors from the continent – this often happens when there is an anticyclone over the UK. High-ozone episodes therefore tend to occur in spring and summer, when such ozone-favourable conditions are most frequent.
Fossil fuel combustion releases NOx, much of it as NO, which can oxidise to form more NO2, react with VOCs, or react directly with ozone. The usually abundant NO and VOCs from vehicle emissions and industry are now significantly lower than usual, so ozone scavenging by NO is minimised.
On top of that, NO has a short lifetime (a few minutes at most), and quickly oxidises to form NO2, which has a longer lifetime and can therefore travel on to rural areas. Rural regions often have higher average ozone concentrations than cities (which might seem counter-intuitive!). Although emissions in those areas are lower, they can experience net ozone production from the additional NO2 that has travelled downwind from a nearby city. In relatively clean tropospheric air, the production and destruction of ozone is closely linked to the ratio of NO to NO2 – an equilibrium known as the photostationary state (Leighton, 1961) – and some studies show a negative correlation between annual-mean NOx and O3 measurements in both rural and urban areas (e.g. Bower et al., 1989). But none of this is particularly simple, because there will always be VOCs present in the air, and ozone production and destruction are also highly sensitive to the NOx : VOC ratio – this was not fully understood until Greiner (1967) and several subsequent studies, which explained the role of the hydroxyl radical OH in the reaction chain that creates NO2 without destroying ozone. Another phenomenon is the ‘weekend effect’, where weekday emissions differ markedly from weekend emissions because there is no morning or evening rush-hour traffic and resultant NO (Seguel et al., 2012). If VOC levels remain high, ozone production is favoured.
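The photostationary state can be sketched in a few lines: at steady state, NO2 photolysis (which produces O3) balances the NO + O3 reaction (which destroys it), giving [O3] = j(NO2)[NO2] / (k[NO]). This is a minimal illustration, not a full chemistry scheme; the rate constant and photolysis frequency below are typical midday literature values, assumed here purely for the sketch.

```python
# Minimal photostationary-state (Leighton) estimate of ozone.
# Assumed typical values: j(NO2) ~ 8e-3 s^-1 at midday,
# k(NO + O3) ~ 1.9e-14 cm^3 molec^-1 s^-1 at 298 K.

def ozone_photostationary(no2, no, j_no2=8e-3, k_no_o3=1.9e-14):
    """[O3] = j(NO2)*[NO2] / (k*[NO]), concentrations in molecules cm^-3."""
    return j_no2 * no2 / (k_no_o3 * no)

# With equal NO and NO2, [O3] reduces to j/k -- and note that halving NO
# (as in a lockdown) doubles the steady-state ozone at fixed NO2.
o3 = ozone_photostationary(no2=2.5e10, no=2.5e10)
print(f"[O3] ~ {o3:.2e} molec cm^-3")
```

This also illustrates the counter-intuitive point above: reducing NO emissions weakens ozone scavenging, so steady-state O3 can rise even as total NOx falls.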
Let us return to the present day. How might the weather conditions affect the delicate balance between NOx, VOCs and ozone? And what about other particles closely monitored throughout the pandemic, such as particulate matter (PM)?
February was unusually wet and windy in the UK. Strong winds can disperse both NO2 and PM, while rain is an efficient sink of PM, physically washing out the particles. Both pollutants have been monitored closely at a number of locations globally over the past few weeks, as they are good indicators of emissions (while ozone is not). Before the gloriously sunny weather came, I wondered about ways of distinguishing between the causes of the unusually low NO2 / PM concentrations: what proportion is attributable to the lockdown, and what to the very wet and windy February / March period in this region? How might ambient ozone concentrations change as we move into the summer, as lockdown measures begin to gradually relax and pollution returns to pre-lockdown levels? And what does this mean for people who are vulnerable to respiratory issues aggravated by ozone? All these questions – and many more – are currently being explored by air quality experts all over the world, hopefully reaching some conclusions in time for us all to act on them in a timely and appropriate way.
Stay home, stay safe, and thank you for reading. Please leave a comment or send an email if you have any questions (I’ll be happy to answer) or corrections: I am a PhD student and there are probably still some gaps in my understanding.
As PhD students, working from home is an option for many of us on a “normal” day – as indeed is increasingly the case with jobs which primarily need just an Internet connection. But, thanks to COVID-19, working from home (WFH) is our new collective reality. So how can we make this work well, when for many, our offices may only now be a few steps away from our beds? We asked around for advice on this matter from current PhD students.
Remember to take a break every half an hour or so. Go away from the desk!
It can be easy to forget to take a break when you’re “at home”, even if you’re also “at work”, and especially when you’re likely closer to the kettle/food/toilet than you would be otherwise. Get up, move around!
Stick to a regular schedule: when you wake up, go to sleep, work, relax, etc.
This is great advice for doing a PhD in general, but even more pertinent now that our routines have been turned upside down.
Pretend that you “go to and from work”, i.e take a morning and afternoon walk/cycle to mark the start and end of your work day.
A commute can be a great time to wake up in the morning and wind down in the evening. Get creative with what you can (safely, and in accordance with government guidance) do to replace your commute during this time.
Pretend that you are going to work by dressing accordingly; it keeps the brain active and makes you more resistant to the ‘do something else’ or ‘relax’ mode triggered by comfy at-home clothes.
It’s tempting to work wearing pyjamas, but will this help your productivity and mindset? Getting dressed for work can also help to maintain your work-life balance.
Look after your posture. If possible, sit at a desk with a screen at the right height.
Try to follow standard health and safety advice when it comes to working long hours at a desk. If possible, invest time and money in making your home working environment a comfortable and non-straining place to be.
If you can at all help it, don’t work in the room where you sleep. It can cause difficulties sleeping.
This also helps add some breaks and changes in your day, which can help to maintain focus and motivation.
Enjoy the benefits of working from home: take a break to actually cook lunch, get things done around the house. Let yourself appreciate the things that are handy about it as well as the negatives.
Being able to get away from your work and do something like ironing, cooking, baking or cleaning might actually help your productivity and concentration by providing a better break than you might otherwise get in an office. Embrace it!
Schedule social e-contact. Don’t let yourself go more than a day without at least hearing someone’s voice on the phone. Use the opportunity to reconnect with old friends.
In Reading, we’re making extensive use of Microsoft Teams to remain in contact with each other and try to mimic our vibrant work atmosphere.
Do (as long as it’s safe to do so) go for walks, head outside, make sure you do some exercise twice a week.
Luckily, we’ve got some very nice weather this week in most of the UK. But do please adhere to social distancing guidelines when you do go outside.
It can be easy for the lines between work and life outside of work to become blurred during a PhD at the best of times, and WFH can make this more problematic. Set your hours, and stick to them.
If you work 8-4, work 8-4! At 4pm, switch your computer off and do something different. Without an evening commute, it can be trickier to bring an end to your working day, but this is probably one of the most important things to maintain.
Most operating systems, including Windows 10, support multiple virtual desktops. Try using one of those for your virtual “work” PC, and another as your virtual “home” PC. Then you can keep the two segregated.
At the end of the day you can switch to your “home” desktop, and then return to “work” the following day.
Exposure to pollutants in the air we breathe may trigger respiratory problems. Pollutants such as ozone (O3) and particulate matter (PM) – particles about 1/20th of the width of a strand of hair – can get into our lungs and cause inflammation, alter their function, or otherwise cause trouble for the cardiovascular system – especially in people with existing underlying respiratory conditions. Although high pollution episodes in the UK are infrequent, the public becomes aware of the associated problems during events such as red skies, caused in part by long-range transport of Saharan dust. Furthermore, the World Health Organisation (WHO) estimates that 85% of UK towns regularly exceed the safe annual PM limit. It is therefore important to forecast surface pollution concentrations accurately, to enable the public to mitigate some of those adverse health risks.
In general, air pollution can be difficult to forecast near the surface because of the multitude of factors which affect it. Incorrectly modelling chemical processes within the atmosphere, surface emissions or indeed the meteorology can lead to errors in predicting ground-level pollution concentrations. It is well accepted within the literature that weather forecasting is of decisive importance for air quality. Thus, my PhD project tries to link forecast errors in meteorological processes within the atmospheric boundary layer (BL) with forecast errors in pollutants such as O3 (ozone) and NO2 (nitrogen dioxide) using the operational air quality forecasting model in the UK, the Air Quality in the Unified Model (AQUM). This model produces an hourly air quality forecast issued to the public by DEFRA in the form of a Daily Air Quality Index (DAQI) and is verified against surface-based observations from the Automatic Urban and Rural Network (AURN).
A three-month evaluation of hourly forecasts from AQUM shows a delay in the average increase of the morning O3 + NO2 (‘total oxidant’) concentrations when compared to AURN observations. We also know that BL depth is important for the mixing of pollutants – it acts as a sort of lid on top of the lower part of the troposphere. Since the noted lag in total oxidant increase in our model occurs exactly at the time of the morning BL development, we can form a testable hypothesis: that an inaccurate representation of BL processes – specifically, morning BL growth – leads to a delay in entrainment of O3-rich air masses from the layer of air above it: the residual layer. It has been suggested in the literature that when the daytime convective mixed layer collapses upon sunset, the remaining pollutants are effectively trapped in the leftover (‘residual’) layer, and thus can act as a night-time reservoir of O3 above the stable or neutral night-time boundary layer (NBL).
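Total oxidant is conventionally defined as Ox = O3 + NO2, a quantity approximately conserved under the rapid NO/NO2/O3 interconversion, which makes it convenient for comparing model and observed diurnal cycles. As a sketch of how a morning lag between two hourly diurnal cycles might be quantified – the cross-correlation approach below is illustrative, not the method used in the AQUM evaluation:

```python
import numpy as np

def total_oxidant(o3_ppb, no2_ppb):
    """Ox = O3 + NO2 (both in the same units, e.g. ppb)."""
    return np.asarray(o3_ppb, dtype=float) + np.asarray(no2_ppb, dtype=float)

def morning_lag_hours(model_ox, obs_ox):
    """Crude lag (whole hours) between two 24-point hourly diurnal
    cycles, taken as the circular shift of the observations that best
    matches the model. Positive means the model lags the observations."""
    m = np.asarray(model_ox, dtype=float)
    o = np.asarray(obs_ox, dtype=float)
    m = (m - m.mean()) / m.std()          # normalise both cycles
    o = (o - o.mean()) / o.std()
    corr = [np.sum(m * np.roll(o, s)) for s in range(len(m))]
    lag = int(np.argmax(corr))
    return lag if lag <= len(m) // 2 else lag - len(m)
```

Applied to diurnal cycles of model and observed Ox, a positive lag of an hour or two would correspond to the kind of delayed morning increase described above.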
To test the hypothesis, semi-idealised experiments are conducted. We simulate a one-month long release of chemically inert tracers within the Numerical Atmospheric Dispersion Environment (NAME) using different sets of numerical weather prediction (NWP) outputs. This enables a process-based evaluation of how different meteorology affects tracers within the BL. Tracers are released within the lateral boundaries of the domain centred on the UK. The idea is to separate the effects of meteorology from chemistry on the tracer concentrations. In particular, we want to understand the role of entrainment of O3-rich air masses from the residual layer down into the developing BL during the morning hours.
We located around 50 AURN sites in urban locations and compared hourly BL depths from June 2017 in the two sets of NWP output used for the tracer simulations: the UKV and UM Global (UMG) configurations of the Met Office Unified Model. It was found that although the average diurnal profiles of BL depth were quite similar, there was a lag in the morning increase of BL depth within the UMG configuration. This may be because the representation of surface sensible heat flux (SSHF) differs in the two NWP models: the UMG uses a single tile scheme to represent urban areas, whereas the UKV uses a more realistic, two-tile scheme (‘MORUSES’) which distinguishes between roof surfaces and street canyons. SSHF is a measure of energy exchange at the ground, where positive fluxes represent a loss of heat from the surface to the atmosphere. Therefore, a more realistic representation of SSHF results in the UKV being better at capturing and storing urban heat. This leads to a faster development of the BL depth in the UKV compared to the UMG, which in turn could mean that there is more turbulent motion and mixing within the atmosphere.
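The diurnal comparison described above boils down to averaging hourly BL depths by hour of day. A minimal pandas sketch, assuming hourly BL depths in a DataFrame with a DatetimeIndex (the column name "blh" is hypothetical):

```python
import pandas as pd

def diurnal_cycle(df, value_col="blh"):
    """Mean diurnal cycle (24 values, indexed by hour of day 0-23) of
    a column in an hourly DataFrame with a DatetimeIndex. "blh" is a
    hypothetical stand-in for boundary-layer depth."""
    return df[value_col].groupby(df.index.hour).mean()
```

Computing this for the UKV and UMG series at each site and overlaying the two 24-point curves makes any lag in the morning rise of BL depth directly visible.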
Assuming that the vertical gradient in pollutant concentrations is positive between the morning BL and the free troposphere, mixing air from above should enhance pollutant concentrations nearer to the surface. Our tracer results show that during days when synoptic conditions are dominated by high pressure, the diurnal cycle in forecast and observed surface pollutant concentrations can be adequately replicated by our simplified set-up. Differences in the diurnal cycle between tracer simulations with the two different meteorological set-ups show that the UKV not only entrains more tracer from above the boundary layer than the simulation using UMG, but also that concentrations increase on average 1 – 2 hours earlier in the morning. These results suggest that the model meteorology – in particular, the representation of BL processes – is indeed important for entrainment of polluted air masses into the BL, which in turn has a significant influence on surface pollutant concentrations.
Within the past two decades, it has been recognised by the weather and air quality modelling communities that neither type of model can truly exist without the other. This post has discussed just one aspect of how meteorology influences the air quality forecast – there are, of course, many other parameters (e.g. wind speed, precipitation, relative humidity) which affect the forecast pollutant concentrations. We therefore also evaluated night-time errors in the wind speed and found that these errors are positively correlated with the total oxidant forecast errors. This means that when the wind speed forecast is overestimated, it is likely to affect the night-time and morning forecast of both O3 and NO2 in a significant way.
Ambient Air Pollution: A global assessment of exposure and burden of disease. WHO, 2016.
Bohnenstengel S., Evans S., Clark P., Belcher S.: Simulations of the London urban heat island, Quarterly Journal of the Royal Meteorological Society, 2011 vol: 137 (659) pp: 1625-1640
Cocks A., 1993: The Chemistry and Deposition of Nitrogen Species in the Troposphere, The Royal Society of Chemistry, Cambridge 1993
Savage N., Agnew P., Davis L., Ordonez C., Thorpe R., Johnson C., O’Connor F., Dalvi M.: Air quality modelling using the Met Office Unified Model (AQUM OS24-26): model description and initial evaluation, Geoscientific Model Development, 2013 vol: 6 pp: 353-372
Sun J., Mahrt L., Banta R., Pichugina Y.: Turbulence Regimes and Turbulence Intermittency in the Stable Boundary Layer during CASES-99, Journal of the Atmospheric Sciences, 2012 vol: 69 (1) pp: 338-351
Zhang, 2008: Online-coupled meteorology and chemistry models: History, current status, and outlook. Atmos. Chem. Phys, 2008 vol: 8 (11) pp: 2895-2932
Forecasting lightning is a difficult problem due to the complexity of the lightning process and how dependent the lightning forecast is on the accuracy of the convective forecast. In order to verify forecasts of lightning independently of the accuracy of the convective forecast, it can be helpful to introduce a lightning scheme that is more complex and physically representative than the simple lightning parameterisations often used in Numerical Weather Prediction (NWP).
The existing method of predicting lightning in the Met Office’s Unified Model (MetUM) uses upwards graupel flux and total ice water path, based on the method of McCaul et al. (2009). However, this method tends to overpredict the total number and coverage of lightning, particularly in the UK.
I’ve implemented a physically based, explicit electrification scheme in the MetUM in order to try and improve the current lightning forecasts. The processes involved in the scheme are shown in the flowchart in Figure 1. The electrification scheme uses the Non-Inductive Charging (NIC) process to separate charge within thunderstorms (Mansell et al., 2005; Saunders and Peck, 1998). The NIC theory states that when graupel and ice crystals collide some charge is transferred from one particle to the other. The sign and the magnitude of the charge that is transferred to the graupel particle depends on a number of parameters. It is affected by the ice crystal diameter, the velocity of the collision, the liquid water content and the temperature at which the collision occurs. Once the charge has been generated on graupel and ice or snow particles, it can be moved around the model domain and can be transferred between hydrometeor species. Charge is removed from hydrometeor species and the domain when the hydrometeors precipitate to the surface or if the hydrometeor evaporates or sublimates. Charge is transferred between hydrometeor species proportionally to the mass that is transferred. Charge is held on graupel, rain and cloud ice (or aggregates and crystals if these are included separately).
Once these charged hydrometeors are distributed through the cloud, they can be totalled to create a charge density distribution. From this distribution the electric field can be calculated. Then from the electric field lightning flashes can be discharged. Lightning flashes are discharged based on two thresholds, the first of these is the initiation threshold and governs where the initiation point for the lightning channel should be (Marshall et al., 1995). The second of these is a propagation threshold and governs whether or not the lightning channel can move through a grid box (Barthe et al., 2012). Lightning channels are only allowed to propagate vertically within a grid column to simplify the model structure (Fierro et al., 2013). Once the channel is created charge is neutralised along the channel, charge is removed from hydrometeor species in both the channel and the grid points immediately adjacent to the channel.
The updated charge density distribution is then used to recalculate the electric field and new flashes are discharged from any points that exceed the electric field threshold. This process keeps repeating until no new lightning flashes are discharged within the domain.
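The discharge cycle described in the last two paragraphs can be sketched as a loop. This is a schematic, not the MetUM implementation: the field solver, the neutralisation rule and the threshold value are all placeholders supplied by the caller.

```python
def discharge_until_stable(charge_density, solve_field, neutralise,
                           init_threshold=150e3, max_iter=50):
    """Schematic of the iterative discharge cycle: compute the electric
    field from the charge density, discharge a flash at every point
    whose field magnitude exceeds the initiation threshold (V/m; the
    default is illustrative), neutralise the charge there, and repeat
    until no point exceeds the threshold.

    solve_field(charge_density) -> field values per grid point
    neutralise(charge_density, i) -> new charge density after a flash
    at point i. Both stand in for the model's own routines."""
    flashes = 0
    for _ in range(max_iter):
        field = solve_field(charge_density)
        hotspots = [i for i, e in enumerate(field)
                    if abs(e) >= init_threshold]
        if not hotspots:
            break  # field everywhere below threshold: stable
        for i in hotspots:
            charge_density = neutralise(charge_density, i)
            flashes += 1
    return charge_density, flashes
```

In the real scheme the separate propagation threshold, the vertical channel geometry and the neutralisation of adjacent grid points described above would all enter inside this loop.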
The plots in Figure 2 show the charge on graupel (a), cloud ice (b), rain (c) and the total charge (d) for a small single-cell thunderstorm in the south of the UK on 31st August 2017. It can be seen in these figures that the charge is mainly positive on cloud ice and mainly negative on graupel. The cloud ice, being less dense, is lofted towards the top of the thunderstorm, while the denser graupel generally falls towards the bottom of the storm. This creates the charge structure seen in Fig. 2d, with two positive-negative dipoles. This charge structure allows strong electric fields to develop between the positive and negative charge centres in each dipole. If the electric field between the charge centres reaches the order of hundreds of kV m-1, the air can become electrically conductive, causing lightning.
The electrification scheme was run within the operational configuration of the MetUM for a case study: a day featuring both organised and single-cell, fair-weather convection on 31st August 2017. The observations of lightning flashes are taken from the Met Office’s ATDNet lightning location system. The total lightning accumulated over the entire day of 31st August is shown in Figure 3. It can easily be seen that the existing method produces far too much lightning compared to the observations. The new scheme is much closer to the observations.
It is an improvement, not only in the total lightning output, but also in the appearance of the lightning flash map. The scattered nature of the observations is captured by the new scheme, whereas the existing parameterisation appears to be largely producing lightning in neat, contoured paths. These paths show that the way that the existing parameterisation predicts lightning is not physically accurate and indicate the problem with the parameterisation, namely that it relies too heavily on the total ice water path. The new scheme suggests a possible improvement, in considering more explicitly the combination of graupel, liquid water and cloud ice that is vital for the production of charge and therefore lightning.
References: Barthe, C., Chong, M., Pinty, J.-P., and Escobar, J. (2012). CELLS v1.0: updated and parallelized version of an electrical scheme to simulate multiple electrified clouds and flashes over large domains. Geoscientific Model Development, (5), 167–184.
Fierro, A. O., Mansell, E. R., MacGorman, D. R., and Ziegler, C. L. (2013). The Implementation of an Explicit Charging and Discharge Lightning Scheme within the WRF-ARW Model: Benchmark Simulations of a Continental Squall Line, a Tropical Cyclone, and a Winter Storm. Monthly Weather Review, 141, 2390–2415.
Mansell, E. R., MacGorman, D. R., Ziegler, C. L., and Straka, J. M. (2005). Charge structure and lightning sensitivity in a simulated multicell thunderstorm. Journal of Geophysical Research, 110.
Marshall, T. C., McCarthy, M. P., and Rust, W. D. (1995). Electric field magnitudes and lightning initiation in thunderstorms. Journal of Geophysical Research, 100, 7097–7103.
McCaul, E. W., Goodman, S. J., LaCasse, K. M., and Cecil, D. J. (2009). Forecasting lightning threat using cloud-resolving model simulations. Weather and Forecasting, 24(3), 709–729.
Saunders, C. P. R. and Peck, S. L. (1998). Laboratory studies of the influence of the rime accretion rate on charge transfer during crystal / graupel collisions. Journal of Geophysical Research, 103, 949–13.
The Arctic has changed a lot over the last four decades. Arctic September sea ice extent has decreased rapidly from 1980 to the present by approximately 3.4 million square kilometres (see Figure 1). This has made the Arctic more accessible for human activities such as shipping, oil exploration and tourism. As Arctic sea ice is expected to continue to decline, human activity in the Arctic is expected to continue to increase. This will increase exposure to hazardous weather conditions, such as high winds and high waves, which are associated with Arctic storms. However, the characteristics of Arctic storms are currently not well understood.
One way to investigate current Arctic storm characteristics is to analyse storms in global reanalysis datasets. Reanalysis datasets combine past observations with current weather models to produce spatially and temporally homogeneous datasets that contain atmospheric data at grid points around the world at constant time intervals (typically every 6 hours) from 1979 to the present (for the modern, satellite-era reanalyses). Typically, a storm tracking algorithm is used to efficiently process all of the 6-hourly data in the reanalysis datasets from 1979 (60,088 time steps!) to identify all of the storms that may have occurred in the past. Storms can be identified in the mean sea level pressure (MSLP) field (as low pressure systems), or in the relative vorticity field (as large rotating systems). The relative vorticity field at 850 hPa (above the atmospheric boundary layer) is typically used so that the field is less influenced by boundary layer processes that may produce areas of high relative vorticity.
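As a toy illustration of the MSLP-based identification step, low-pressure centres can be flagged as local minima in a 2-D pressure field. A real tracking algorithm does far more (spatial filtering, feature linking in time, carefully tuned thresholds); the neighbourhood test and `delta` threshold below are purely illustrative:

```python
import numpy as np

def find_pressure_minima(mslp, delta=100.0):
    """Flag candidate storm centres in a 2-D MSLP field (Pa): interior
    grid points strictly lower than all eight neighbours and at least
    `delta` Pa below the field mean. Returns (row, col) index pairs.
    The thresholds are illustrative, not those of any operational
    tracking scheme."""
    p = np.asarray(mslp, dtype=float)
    ny, nx = p.shape
    core = p[1:-1, 1:-1]
    is_min = np.ones(core.shape, dtype=bool)
    for dy in (-1, 0, 1):          # compare against all 8 neighbours
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            is_min &= core < p[1 + dy:ny - 1 + dy, 1 + dx:nx - 1 + dx]
    ys, xs = np.where(is_min & (core < p.mean() - delta))
    return [(int(y) + 1, int(x) + 1) for y, x in zip(ys, xs)]
```

Tracking then links the minima found at successive 6-hourly time steps into storm tracks; the vorticity-based approach works analogously but searches for maxima in 850 hPa relative vorticity.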
At the moment, atmospheric scientists are spoilt for choice when it comes to choosing a reanalysis dataset to analyse. There are reanalysis datasets from multiple institutions: the European Centre for Medium-Range Weather Forecasts (ECMWF), the Japan Meteorological Agency (JMA), the National Aeronautics and Space Administration (NASA), and the National Centers for Environmental Prediction (NCEP). Each institution has created its reanalysis dataset in a slightly different way, using its own numerical weather prediction model and data assimilation system. Atmospheric scientists also have to choose whether to use the MSLP field or the 850 hPa relative vorticity field when applying their storm tracking algorithm to the reanalysis datasets.
In my recent paper, I aimed to assess Arctic storm characteristics in the multiple reanalysis datasets currently available (ERA-Interim, JRA-55, MERRA-2 and NCEP-CFSR), using a storm tracking algorithm based on 850 hPa relative vorticity and MSLP fields. Below is a short summary of some of the results from the paper.
Despite the Arctic environment changing dramatically over the last four decades, we find no change in the frequency and intensity of Arctic storms in any of the reanalysis datasets compared in this study. Preceding, older versions of atmospheric reanalysis datasets suggested that Arctic storm frequency had increased from 1949-2002 (Walsh, 2008; Sepp & Jaagus, 2011). This is in contrast with results from the modern reanalysis datasets (from this study, and Simmonds et al., 2008; Serreze & Barrett, 2008; Zahn et al., 2018), which show no increase in Arctic storm frequency.
Across all the reanalysis datasets, some robust characteristics of Arctic storms were found. For example, the spatial distribution of Arctic storms is found to be seasonally dependent. In winter (DJF), Arctic storm track density is highest over the Greenland, Norwegian and Barents Seas, whereas in summer (JJA), Arctic storm track density is highest over and north of the Eurasian coastline (a region known as the Arctic Frontal Zone; Reed & Kunkel, 1960) (see Figure 2). The number of trans-Arctic ships in summer is much higher than in winter, and these ships typically use the Northern Sea Route to travel between Europe and Asia (along the coastline of Eurasia). Figure 2b shows that this is in fact where most of the summer Arctic storms occur. In addition, the reanalysis datasets show that ~50% of Arctic storms have genesis in mid-latitude regions (south of 65°N) and travel northwards into the Arctic (north of 65°N). This shows that storms are a significant mechanism for transporting air from low to high latitudes.
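The ~50% genesis statistic corresponds to a simple count over storm tracks. A toy sketch, reducing each track to just its sequence of latitudes (real track data carry positions, times and intensities):

```python
def arctic_genesis_fraction(tracks, arctic_lat=65.0):
    """Fraction of Arctic-reaching storms (tracks attaining latitudes
    at or north of `arctic_lat`) whose genesis point lies south of it.
    Each track is a list of latitudes along the storm's path, with the
    first point taken as genesis -- a deliberately simplified view of
    real storm-track data."""
    arctic = [t for t in tracks if max(t) >= arctic_lat]
    if not arctic:
        return 0.0
    from_south = sum(1 for t in arctic if t[0] < arctic_lat)
    return from_south / len(arctic)
```

A value near 0.5, as found here, means roughly half of the storms inside the Arctic circle of latitude originated in the mid-latitudes.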
In general, there is less consistency in Arctic storm characteristics in winter than in summer. This may be because meteorological conditions such as low-level cloud, stable boundary layers and polar night are more frequent in winter, and these are more challenging to represent in numerical weather prediction models and in the creation of reanalysis datasets. In addition, there is a low density of conventional observations in winter, and difficulties in identifying cloud and estimating emissivity over snow and ice limit the current use of infrared and microwave satellite data in the troposphere (Jung et al., 2016).
The differences between the reanalysis datasets in Arctic storm frequency per season in winter (DJF) and summer (JJA) (1980-2017) were found to be less than 6 storms per season. On the other hand, the differences in Arctic storm frequency per season between storms identified by a storm tracking algorithm based on 850 hPa relative vorticity and MSLP were found to be 55 storms per season in winter, and 33 storms per season in summer. This shows that the decision to use 850 hPa relative vorticity or MSLP for storm tracking can be more important than the choice of reanalysis dataset.
I finished my PhD last year, and since the start of this year I’ve been doing something rather different. Courtesy of SCENARIO DTP funding, I am doing a 3-month post-doc placement with JBA Consulting in Skipton, North Yorkshire. After spending 3.5 years researching in an academic setting, it is great to be able to apply my knowledge to real-world problems.
Working in industry has a very different feel to working in academia. The science being done has an immediate purpose for the company, rather than being done purely to extend knowledge. In the case of my placement, the work that I am doing is ultimately to benefit the end users of the product.
The field that I am now working in is rather far removed from my PhD project: I have gone from gravity waves to surface water flooding. Whilst it has been quite a steep learning curve to bring myself up to speed with the current science in this area, it is great to branch out. I would urge anyone interested in doing an industrial placement not to be put off by going outside of your subject area. You might find something else that suits you better. It might even be the best step you ever take.
The choosing and setting up of the placement has all been fairly easy for me. SCENARIO had a range of placements available and I chose the one that most interested me. I had to send an application to the company, who then called me for an interview. Once they decided to offer me the placement, SCENARIO did the setting up with both JBA and the university. All I needed to worry about was finding accommodation for the 3 months.
To anyone considering doing an industrial placement: do it! I am currently 3 weeks in and have really enjoyed it so far. Everybody has been welcoming and helpful. I felt like part of the team by the end of my first day.