Evaluating aerosol forecasts in London

Email: e.l.warren@pgr.reading.ac.uk

Aerosols in urban areas can greatly impact visibility, radiation budgets and our health (Chen et al., 2015). Aerosols are the liquid and solid particles suspended in the air that, alongside noxious gases like nitrogen dioxide, make up the pollution we so often hear about on the news, breaking safety limits in cities across the globe from London to Beijing. Air quality researchers try to monitor and predict aerosols to inform local councils, so that they can plan and reduce local emissions.

Figure 1: Smog over London (Evening Standard, 2016).

Recently, large numbers of LiDARs (Light Detection and Ranging instruments) have been deployed across Europe and elsewhere, in part to observe aerosols. They shoot beams of light into the atmosphere, which reflect off atmospheric constituents like aerosols. From each beam, many measurements of the reflected light are taken in quick succession, and because a return received later has travelled further (a signal detected a time t after the pulse is fired comes from a range r = ct/2), an entire vertical profile of reflectance can be constructed. Because the beam is attenuated as it penetrates the atmosphere, the measured quantity is commonly called attenuated backscatter (β). In urban areas, measurements away from the surface like these are sorely needed (Barlow, 2014), so these instruments could be extremely useful.

When it comes to predicting aerosols, numerical weather prediction (NWP) models are increasingly being considered as an option. However, the models are computationally expensive to run, so they tend to have only a simple representation of aerosol. For example, the explicitly resolved aerosol in the Met Office UKV model (1.5 km grid length) is just a dry aerosol mass [kg kg-1] (Clark et al., 2008). That's all. It gets transported around by the model dynamics, but any other aerosol characteristics, from size to number, have to be parameterised from the mass to limit computational cost. But how do we know whether the model's aerosol estimates are actually correct? A direct comparison between NWP aerosol and β is not possible because they are fundamentally different variables, so to bridge the gap, a forward operator is needed.

In my PhD I helped develop such a forward operator (aerFO; Warren et al., 2018). It is a model that takes aerosol mass (and relative humidity) from NWP model output and estimates the attenuated backscatter that would result (βm). This βm can then be compared directly to the observed attenuated backscatter (βo) and the NWP aerosol output evaluated (e.g. to see whether the aerosol is too high or too low). The aerFO was also made to be computationally cheap and flexible, so if you have more information than just the mass, the aerFO can use it!
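
To make this concrete, here is a minimal sketch of the chain of steps a forward operator of this kind has to perform. The constants, the mass-to-number power law and the hygroscopic growth law below are illustrative placeholders of my own, not the published aerFO parameterisations (see Warren et al., 2018 for those):

```python
import numpy as np

def forward_model_backscatter(m, rh, dz, lidar_ratio=40.0):
    """Toy forward operator: profiles of aerosol mass mixing ratio m [kg kg-1]
    and relative humidity rh [0-1], in layers of thickness dz [m], are turned
    into a profile of attenuated backscatter [m-1 sr-1]."""
    # 1. Parameterise number and dry size from mass alone (hypothetical power law).
    n = 2.0e9 * (m / 1.0e-8) ** 0.5                    # number concentration [m-3]
    mass_conc = m * 1.2                                # times air density [kg m-3]
    r_dry = (3.0 * mass_conc / (4.0 * np.pi * 1500.0 * n)) ** (1.0 / 3.0)

    # 2. Hygroscopic growth: particles swell as relative humidity rises.
    growth = (1.0 + 0.3 * rh / np.clip(1.0 - rh, 0.05, None)) ** (1.0 / 3.0)
    r_wet = r_dry * growth

    # 3. Extinction from the total cross-section (extinction efficiency ~2),
    #    then backscatter via an assumed extinction-to-backscatter (lidar) ratio.
    alpha = 2.0 * n * np.pi * r_wet ** 2               # extinction coefficient [m-1]
    beta = alpha / lidar_ratio                         # backscatter [m-1 sr-1]

    # 4. Apply the two-way attenuation down to each range gate.
    tau = np.cumsum(alpha * dz)                        # optical depth from the ground
    return beta * np.exp(-2.0 * tau)

# A moist, polluted boundary layer (lowest 40 gates) below cleaner, drier air.
z = np.arange(100)
m = np.where(z < 40, 3.0e-8, 2.0e-9)
rh = np.where(z < 40, 0.80, 0.40)
beta_m = forward_model_backscatter(m, rh, dz=30.0)
print(beta_m[:3])   # near-surface attenuated backscatter [m-1 sr-1]
```

Even this toy version reproduces the behaviour discussed below: raise the relative humidity or the mass and βm goes up, which is exactly the lever through which NWP biases show up in the forward-modelled backscatter.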

Among the aerFO's several uses (Warren et al., 2018, n.d.) is the evaluation of NWP model output. Figure 2 shows the aerFO in action, comparing βm with βo measured at 905 nm by a ceilometer (a type of LiDAR) at Marylebone Road in London on 14th April 2015. βm was far too high that morning. We found that the original scheme the UKV used to parameterise urban surface effects in London was causing a persistent cold bias in the morning. The cold bias led to a relative humidity that was too high, so the aerFO condensed too much water onto the aerosol particles, swelling them too much; bigger particles mean a bigger βm, hence the overestimation. Not only was the relative humidity too high, the boundary layer in the NWP model was also developing too late in the day. Normally, once the surface warms up enough, convection starts, which mixes aerosol up through the boundary layer and dilutes it near the surface. The cold bias delayed this boundary layer development, so the aerosol concentration near the surface remained high for too long. More mass led the aerFO to parameterise larger sizes and total numbers of particles, and hence to overestimate βm. This cold bias effect appeared across several cases using the old scheme, but was notably smaller for cases using a newer urban surface scheme called MORUSES (Met Office – Reading Urban Surface Exchange Scheme). One of the main aims of MORUSES was to improve the representation of energy transfer in urban areas, and to us at least it seemed to be doing a better job!

Figure 2: Vertical profiles of attenuated backscatter [m−1 sr−1] (log scale) that are (a, g) observed (βo), with estimated mixing layer height (red crosses; Kotthaus and Grimmond, 2018), and (b, h) forward modelled (βm) using the aerFO; (c, i) attenuated backscatter difference (βm – βo) calculated using the hourly βm vertical profile and the vertical profile of βo nearest in time; (d, j) aerosol mass mixing ratio (m) [μg kg−1]; (e, k) relative humidity (RH) [%]; and (f, l) air temperature (T) [°C] at Marylebone Road (MR) on 14th April 2015.

References

Barlow, J.F., 2014. Progress in observing and modelling the urban boundary layer. Urban Clim. 10, 216–240. https://doi.org/10.1016/j.uclim.2014.03.011

Chen, C.H., Chan, C.C., Chen, B.Y., Cheng, T.J., Leon Guo, Y., 2015. Effects of particulate air pollution and ozone on lung function in non-asthmatic children. Environ. Res. 137, 40–48. https://doi.org/10.1016/j.envres.2014.11.021

Clark, P.A., Harcourt, S.A., Macpherson, B., Mathison, C.T., Cusack, S., Naylor, M., 2008. Prediction of visibility and aerosol within the operational Met Office Unified Model. I: Model formulation and variational assimilation. Q. J. R. Meteorol. Soc. 134, 1801–1816. https://doi.org/10.1002/qj.318

Warren, E., Charlton-Perez, C., Kotthaus, S., Lean, H., Ballard, S., Hopkin, E., Grimmond, S., 2018. Evaluation of forward-modelled attenuated backscatter using an urban ceilometer network in London under clear-sky conditions. Atmos. Environ. 191, 532–547. https://doi.org/10.1016/j.atmosenv.2018.04.045

Warren, E., Charlton-Perez, C., Kotthaus, S., Marenco, F., Ryder, C., Johnson, B., Lean, H., Ballard, S., Grimmond, S., n.d. Observed aerosol characteristics to improve forward-modelled attenuated backscatter. Atmos. Environ., submitted.


Is our “ECO mode” hot water boiler eco-friendly?

A lesson in practical thermodynamics.

Maarten Ambaum (m.h.p.ambaum@reading.ac.uk)
Mark Prosser (m.prosser@pgr.reading.ac.uk)

Everybody knows that the key boundary condition for a successful PhD is the provision of plenty of coffee during the day (tea, for some). Our Department has a hot water boiler with a 10 litre tank to provide an unlimited supply of hot water (it is connected to the tap to keep it topped up automatically). For historical reasons we call it the "urn"; I like that word, so we stick with it here.

When we got a new urn recently (a "Marco Ecoboiler T10") we were intrigued to see it had an ECO mode button, presumably promising lower energy consumption. Indeed, whenever anyone saw in the morning that the urn was not in "ECO mode", it was swiftly switched on; green credentials and all that.

One of our postdocs dug up the specs from the internet, from which we learned that "ECO mode" actually makes the urn operate with 5 litres of water, that is, half full. The specs suggest that when switching the urn on you then only need to heat half the amount of water. But is there more to it? Would the urn actually use less energy working with a half-full tank?

I teach atmospheric physics to our Masters and PhD students, and this is precisely the kind of question I would ask them to think about. In fact I sent an email to all members of the Department, and it turned out that there were different opinions, even amongst those who should know better, although in my view there was obviously only one physically correct answer.

So, let us find out. First some theory (some basic thermodynamics), then experiment, and conclusions at the end. 

One of the first things we learn in thermodynamics is conservation of energy: energy in equals energy out. The energy in is the electrical energy the urn uses; the energy out is the energy in the hot water we consume, heated from around 15°C to around 95°C, plus thermal losses, plus the running of the urn's internal electronics. The last bit is marginal, just a controller and a few LEDs, so we will ignore it. The thermal loss may well be substantial, but the water tank is actually quite well insulated with Styrofoam, so who knows.

Given that we drink the same amount of coffee, whether the urn is in ECO mode or not, the energy cost for producing the hot water does not depend on whether we run at half tank capacity or full tank capacity. We still need to heat up the same amount of water for our coffee consumption. 

What is left is the energy loss. But the energy loss is proportional to the temperature difference between the inside of the tank and the outside. The inside of the tank remains close to 95°C all the time, so it looks like the energy loss also cannot depend on whether we are in ECO mode or not.

Energy in equals energy out, energy out remains the same, so energy in should remain the same, ECO mode or not. 

Did we miss something? Surely, a feature that is advertised as ECO mode should consume less energy? 

We should give the manufacturer some credit. They claim: “This mode saves energy by minimising the energy wasted during machine down-time. The ECO mode is most effective in installations where the machine has a regular ‘off’ period.” Perhaps; perhaps not. 

Unfortunately, they also claim: "During the 'off' period as there is less water in the tank there will be less energy lost to the surrounding environment resulting in an energy saving." This latter claim is a tricky one: the energy loss is proportional to the temperature difference between the tank and the exterior, irrespective of how much water is in the tank. As the heat capacity of the full tank is higher, its temperature will fall more slowly, possibly leading to a higher total energy loss, as the temperature difference stays higher on average for a full tank. After switching the urn on again, this increased energy loss needs to be topped up. Is that, then, the way ECO mode helps us be green?

We did what any scientist would do when faced with such a question: the experiment; this is where Mark comes in. Easy enough: these days you can buy power adaptors that plug into the wall socket and accumulate the total amount of electrical energy used over some period.

We did four experiments: two midweek ones, each running for three consecutive 24-hour periods from Tuesday to Thursday, and two weekend ones, running from 6pm on Friday to 9am on Monday. In half of the experiments the ECO mode button was left on, and in the other half it was left off.

Straight to the results: 

                    ECO mode     Non-ECO mode
Midweek (3 days)    21.77 kWh    20.25 kWh
Weekends            4.2 kWh      4.05 kWh

Lo and behold: it does not make much difference at all and, if anything, ECO mode uses more energy! 

Of course the experiment is not carefully controlled: perhaps we drank more coffee during the ECO mode periods. But both weeks were pretty similar in coffee room usage, there were no big events, and the two weekends were pretty much completely quiet. In fact the weekend usage is probably dominated by the usage before 9am on the Monday. We have cleaners who come in very early, quite a few members of staff come in before nine in the morning, and perhaps even some PhD students!

Let's do some more analysis of the data. Normal usage is about 7 kWh per day, as in the midweek data. That means that of the 4.1 kWh weekend usage, less than about 1 kWh (about one seventh of a normal day's usage, to account for the Monday morning usage; I know it is a rough estimate) corresponds to normal usage, and the rest is energy lost while the urn is switched on but not used. I estimate the loss to be 1.7 kWh per day, so that a weekend, including the Monday early rush hour, corresponds to about 3.4 kWh of losses and about 0.7 kWh of normal usage.

So, of the 7 kWh daily energy usage, about 1.7 kWh is thermal energy loss (and other bits and bobs, such as the lovely LEDs at the front of the urn), with an error bar, I guess, of possibly 30%. Is this a lot of energy loss? 1.7 kWh per day corresponds to a 70 W loss, about the same as the lighting of a single-person office. Not bad. The Marco Ecoboiler is probably pretty "eco", but not because of its ECO mode.

We are then left with 5.3 kWh each day to make coffee. A cup is about 200 ml, and assuming the water needs heating from 15°C to 95°C, each cup of coffee requires 0.2 kg × 80 K × 4200 J/kg/K = 67 kJ of energy, or about 0.019 kWh. That means 5.3 kWh corresponds to about 280 cups of coffee per day. Probably quite realistic, given the size of our Department.
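
For anyone who wants to check the arithmetic, a few lines of Python reproduce the rounded numbers used above:

```python
# A quick check of the back-of-envelope numbers (values rounded as in the text).
CP_WATER = 4200.0                             # specific heat of water [J kg-1 K-1]
cup_J = 0.2 * (95.0 - 15.0) * CP_WATER        # one 200 ml cup, heated 15 -> 95 C
cup_kWh = cup_J / 3.6e6                       # 1 kWh = 3.6e6 J
print(f"Energy per cup: {cup_J / 1e3:.0f} kJ = {cup_kWh:.4f} kWh")  # ~67 kJ, ~0.019 kWh
print(f"Cups per day from 5.3 kWh: {5.3 / cup_kWh:.0f}")            # ~280 cups
print(f"Daily loss at 70 W: {0.070 * 24:.1f} kWh")                  # ~1.7 kWh
```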

Should we switch off the urn overnight? Well, an overnight period (all losses, as there is no usage of the urn, perhaps about 11 hours) would use about 0.8 kWh. But then, of course, the tank will have cooled down, perhaps to 30°C, and needs reheating to 95°C. For a 10 litre tank this costs about 0.8 kWh too. Funny, that: it is probably better to just leave the urn on overnight, to stop people from using highly inefficient kitchen kettles, and from having to wait for the urn to heat up in the morning.

Actually, this is not as much of a coincidence as it may seem: the thermal loss during the overnight off period must of course equal the loss in thermal energy of the water, which then needs to be replenished when we reheat the water back to 95°C.

As I said before, the full tank could well lose more energy, as it stays relatively warmer during the cooling-off period compared to the half-full tank of ECO mode. But a quick calculation, assuming a well-insulated tank, shows that the temperature reduction is proportional to (T0 − Te)/k, with T0 the initial tank temperature (95°C), Te the external temperature, and k the heat capacity of the tank. So, indeed, a full tank, with larger k, has a smaller temperature reduction with time, and remains warmer on average. But the energy cost of this reduction of course equals the heat capacity k times the change in temperature, k × (T0 − Te)/k = (T0 − Te), so we get an energy loss proportional to (T0 − Te) but independent of the heat capacity k of the tank. It looks like the engineers at the manufacturer overlooked some basic physics.
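
A small sketch makes the argument concrete, assuming Newton's law of cooling with the same insulation for both tank settings; the loss coefficient is an illustrative guess for a well-insulated tank, not a measured value:

```python
import numpy as np

# Newton's law of cooling: k dT/dt = -LAM * (T - TE), same insulation for both tanks.
LAM = 0.2                 # heat loss per kelvin of temperature difference [W K-1], a guess
TE, T0 = 20.0, 95.0       # room and initial tank temperatures [C]
HOURS = 11.0              # overnight 'off' period [h]

for litres in (5.0, 10.0):                       # ECO (half-full) vs full tank
    k = litres * 4200.0                          # heat capacity [J K-1], ~1 kg per litre
    t_end = TE + (T0 - TE) * np.exp(-LAM * HOURS * 3600.0 / k)
    loss_kwh = k * (T0 - t_end) / 3.6e6          # energy to replenish on reheating
    print(f"{litres:4.1f} L tank: cools to {t_end:4.1f} C, "
          f"overnight loss {loss_kwh:.2f} kWh")
# Both losses come out nearly the same (the full tank marginally higher), as the
# linearised argument above predicts: the loss depends on (T0 - TE), hardly on k.
```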

By the way: how long would it take to reheat the tank in the morning if it had cooled to 30°C overnight? Well, at full pelt the urn uses 2.8 kW, so the required 0.8 kWh takes just over a quarter of an hour to produce. Pretty long wait. Probably not worth the frustration.

So, to conclude: our Ecoboiler is quite "eco": it wastes only about 70 W in thermal losses. Not bad for a Department that uses big computing resources (not so "eco").

The thermal energy losses from the urn are pretty modest in the grand scheme of things, and it turns out to be better to just leave the urn on overnight, as the cost of reheating the cold urn in the morning is nearly the same as the energy cost of leaving it on. Leaving the urn on over the weekend is probably also better than switching it off, because the occasional weekend user will otherwise end up using a highly inefficient kitchen kettle.

The "ECO mode" button makes the urn operate at half tank capacity, but the thermodynamic arguments as well as the measurements show that it uses at least the same amount of energy in ECO mode. In fact, at half capacity the tank contains more steam, which is possibly slightly hotter on average than the liquid, so slightly more energy may be lost through conduction. Just leave the ECO mode button switched off; it doesn't do any good.


Quantifying the skill of convection-permitting ensemble forecasts of sea-breeze occurrence

Email: carlo.cafaro@pgr.reading.ac.uk

On the afternoon of 16th August 2004, the village of Boscastle on the north coast of Cornwall was severely damaged by flooding (Golding et al., 2005). This is one example of high-impact hazardous weather associated with small meso- and convective-scale weather phenomena, the prediction of which can be uncertain even a few hours ahead (Lorenz, 1969; Hohenegger and Schär, 2007). Taking advantage of increased computer power (e.g. https://www.metoffice.gov.uk/research/technology/supercomputer), many operational and research forecasting centres have introduced convection-permitting ensemble prediction systems (CP-EPSs) in order to give timely warnings of severe weather.

However, despite being an exciting new forecasting technology, CP-EPSs place a heavy burden on the computational resources of forecasting centres. They are usually run over limited areas, with initial and boundary conditions provided by global lower-resolution ensembles (LR-EPSs). They also produce large amounts of data which need to be rapidly digested and utilised by operational forecasters. Assessing whether the convective-scale ensemble is likely to provide useful additional information is key to successful real-time utilisation of these data. Similarly, knowing where equivalent information can be gained (even if partially) from LR-EPSs using statistical/dynamical post-processing both extends lead time (thanks to faster production) and potentially provides information in regions where no convective-scale ensemble is available.

There have been many studies on the verification of CP-EPSs (Klasa et al., 2018; Hagelin et al., 2017; Barrett et al., 2016; Beck et al., 2016, amongst others), but none of them has quantified the skill gained by CP-EPSs, when fully exploited, in comparison with LR-EPSs for specific weather phenomena over a long enough evaluation period.

In my PhD, I have focused on the sea-breeze phenomenon for several reasons:

  1. Sea breezes have an impact on air quality by advecting pollutants, on heat stress by providing relief on hot days, and on convection by providing a trigger, especially when interacting with other mesoscale flows (see for example figure 1, or figures 6 and 7 in Golding et al., 2005).
  2. Sea breezes occur on small spatio-temporal scales which are properly resolved at convection-permitting resolutions, but their occurrence is still influenced by synoptic-scale conditions, which are resolved by the global LR-EPSs.

Figure 1: MODIS visible image of southeast Italy on 6th June 2018, 1020 UTC, showing thunderstorms occurring in the middle of the peninsula, probably triggered by sea breezes.
Source: worldview.earthdata.nasa.gov

This study therefore aims to investigate whether the sea breeze is predictable from just a few large-scale predictors, or whether the better representation of fine-scale structures (e.g. orography and topography) by the CP-EPS implies a better sea-breeze prediction.

In order to estimate probabilistic forecasts from the two modelling systems, two different methods were applied. A novel tracking algorithm for identifying the sea-breeze front, over the domain shown in figure 2, was applied to the CP-EPS data. A Bayesian model, trained on CP-EPS data, was instead used to estimate the probability of a sea breeze conditioned on two LR-EPS predictors. More details can be found in Cafaro et al. (2018).
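
As a rough illustration of the second method (my own toy version, not the actual model of Cafaro et al., 2018), a Bayes' theorem classifier with Gaussian likelihoods might look like the sketch below; the two predictors and all the numbers are hypothetical:

```python
import numpy as np

def log_gauss(x, mu, sd):
    """Log-density of independent Gaussians, summed over the predictors."""
    return float(np.sum(-0.5 * ((x - mu) / sd) ** 2 - np.log(sd)))

def sea_breeze_probability(x, x_sb, x_no):
    """x: predictors for one forecast day; x_sb / x_no: training predictors on
    sea-breeze / non-sea-breeze days (labels from a CP-EPS detection algorithm)."""
    prior = len(x_sb) / (len(x_sb) + len(x_no))
    ll_sb = log_gauss(x, x_sb.mean(axis=0), x_sb.std(axis=0)) + np.log(prior)
    ll_no = log_gauss(x, x_no.mean(axis=0), x_no.std(axis=0)) + np.log(1 - prior)
    return 1.0 / (1.0 + np.exp(ll_no - ll_sb))      # posterior P(sea breeze | x)

# Hypothetical predictors: [low-level wind speed (m/s), land-sea temperature contrast (K)].
rng = np.random.default_rng(1)
x_sb = rng.normal([3.0, 4.0], [1.0, 1.0], size=(200, 2))   # light winds, big contrast
x_no = rng.normal([8.0, 1.0], [1.5, 1.5], size=(200, 2))
print(sea_breeze_probability(np.array([4.0, 3.0]), x_sb, x_no))  # high probability
```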

Figure 2: A map showing the orography over the southern UK domain. Orography data are from MOGREPS-UK. The solid box encloses the sub-domain used in this study, with red dots indicating the locations of synoptic weather stations. Source: Cafaro et al. (2018)

The results of the probabilistic verification are shown in figure 3. Reliability (REL) and resolution (RES) terms were computed by decomposing the Brier score (BS) and the ignorance score (IGN). Finally, score differences (BSD and IG) were computed to quantify any gain in skill from the CP-EPS. Figure 3 shows that the CP-EPS forecast is significantly more skilful than the Bayesian forecast. Nevertheless, the Bayesian forecast has more resolution than a climatological forecast (figure 3e,f), which has no resolution by construction.
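
For readers unfamiliar with these scores, here is a sketch of the standard Murphy decomposition of the Brier score, BS = REL − RES + UNC, where reliability penalises miscalibration and resolution rewards separating sea-breeze days from the rest. The binning and data below are synthetic; the paper's exact verification setup may differ:

```python
import numpy as np

def brier_decomposition(p, o, n_bins=10):
    """p: forecast probabilities in [0, 1]; o: binary outcomes (1 = sea breeze)."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    n, base = len(p), o.mean()
    rel = res = 0.0
    which = np.minimum((p * n_bins).astype(int), n_bins - 1)   # bin index per forecast
    for k in range(n_bins):
        sel = which == k
        if sel.any():
            rel += sel.sum() * (p[sel].mean() - o[sel].mean()) ** 2
            res += sel.sum() * (o[sel].mean() - base) ** 2
    return rel / n, res / n, base * (1 - base)                 # REL, RES, UNC

rng = np.random.default_rng(0)
p = rng.uniform(size=5000)
o = (rng.uniform(size=5000) < p).astype(float)       # a well-calibrated toy forecast
rel, res, unc = brier_decomposition(p, o)
print(f"REL={rel:.4f}  RES={res:.4f}  UNC={unc:.4f}  BS={rel - res + unc:.4f}")
print(f"direct BS = {np.mean((p - o) ** 2):.4f}")    # nearly matches (binning residual)
```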

Figure 3: (a)-(d) Reliability and resolution terms calculated for both forecasts (green for the CP-EPS forecast, blue for the LR-EPS-based forecast). (e) and (f) show the Brier score difference (BSD) and information gain (IG) respectively. Error bars represent the 95% confidence interval. Positive values of BSD and IG indicate that the CP-EPS forecast is more skilful. Source: Cafaro et al. (2018)

This study demonstrates the additional skill provided by the Met Office convection-permitting ensemble for sea-breeze prediction. The ability of CP-EPSs to resolve mesoscale dynamical features is thus shown to be important: two large-scale predictors alone, however relevant for the sea breeze, are not sufficient for a skilful prediction.

Both methodologies can, in principle, be applied to other locations around the world, and it is hoped that they could be used operationally.

References:

Barrett, A. I., Gray, S. L., Kirshbaum, D. J., Roberts, N. M., Schultz, D. M., and Fairman, J. G. (2016). The utility of convection-permitting ensembles for the prediction of stationary convective bands. Monthly Weather Review, 144(3):1093–1114, doi: 10.1175/MWR-D-15-0148.1

Beck, J., Bouttier, F., Wiegand, L., Gebhardt, C., Eagle, C., and Roberts, N. (2016). Development and verification of two convection-allowing multi-model ensembles over western Europe. Quarterly Journal of the Royal Meteorological Society, 142(700):2808–2826, doi: 10.1002/qj.2870

Cafaro, C., Frame, T. H. A., Methven, J., Roberts, N., and Broecker, J. (2018). The added value of convection-permitting ensemble forecasts of sea breeze compared to a Bayesian forecast driven by the global ensemble. Quarterly Journal of the Royal Meteorological Society, under review.

Golding, B. , Clark, P. and May, B. (2005), The Boscastle flood: Meteorological analysis of the conditions leading to flooding on 16 August 2004. Weather, 60: 230-235, doi: 10.1256/wea.71.05

Hagelin, S., Son, J., Swinbank, R., McCabe, A., Roberts, N., and Tennant, W. (2017). The Met Office convective-scale ensemble, MOGREPS-UK. Quarterly Journal of the Royal Meteorological Society, 143(708):2846–2861, doi: 10.1002/qj.3135

Hohenegger, C. and Schär, C. (2007). Atmospheric predictability at synoptic versus cloud-resolving scales. Bulletin of the American Meteorological Society, 88(11):1783–1794, doi: 10.1175/BAMS-88-11-1783

Klasa, C., Arpagaus, M., Walser, A., and Wernli, H. (2018). An evaluation of the convection-permitting ensemble COSMO-E for three contrasting precipitation events in Switzerland. Quarterly Journal of the Royal Meteorological Society, 144(712):744–764, doi: 10.1002/qj.3245

Lorenz, E. N. (1969). Predictability of a flow which possesses many scales of motion. Tellus, 21:289–307, doi: 10.1111/j.2153-3490.1969.tb00444.x

Going Part-time…

Email: r.f.couchman-crook@pgr.reading.ac.uk

**Scroll to the bottom for a picture of a bearded dragon.**

A full-time PhD is not always what you see yourself doing. Perhaps you don’t like the idea of being an academic, going through the realities of post-doc life, and battling for the few research roles out there. Maybe you want to get a job in industry, but keep your hand in the research pool. Maybe you have other commitments, meaning that your time is limited but you want to still learn and build your research skills. Whatever the reason, there is always an option to go part-time.

After doing a year and a bit full-time, I knew I wanted to work outside academia, in something more practical than an office-based PhD. Wanting to make use of the work I'd already started, my supervisors, my funders and I agreed that a part-time MPhil gave the outcome that all parties wanted. It means I can finish my studies sooner and have something tangible to show for my years of study, while also providing new research on my topic that can be used by subsequent researchers.

But how to broach the subject in the first place? You need to take a bit of time to consider the reasons why you want to change, but not so long that you end up regretting never actually saying how you feel. It's really important at this stage to assess your options and think about the practicalities, like how the change will affect your funding.

It is also important to work out how your new schedule will fit together. Part-time doesn't mean a few hours a week; it means half of what a full-time PhD student would do. With my hours, that means I do 12 hours a week and then work during school holidays. Realistically I won't get much time off, but it is workable within a roughly 8-to-6 schedule. It's important to keep your weekends as free as possible, because social time will help keep you sane!

And in terms of touching base with your supervisor, for me that means coming in once a fortnight, and keeping a daily record of everything I've been up to, so I know exactly where I am against my project objectives. You and your supervisor need to be realistic about how much you can complete in a given time, and accept that your work won't happen as quickly as before, so managing expectations is important. And if things aren't working, it's important to look at them again, perhaps with the help of your Monitoring Committee, to keep on top of your work.

It’s also important to learn to say no – anyone who knows me knows I struggle with this! People might be under the impression that you have more time to take on other stuff now that you’re part-time, but you have to know what you can make time for in your schedule (like writing a short blog), what might bring other benefits (little bit of open day volunteering), and what really isn’t your problem to worry about!

Having gone part-time, a lot of the stresses seem to have relaxed; it’s nice to not feel like the PhD is all-consuming, and I’m finding it easier to manage my targets each fortnight. If anything, knowing I only have a limited window for work seems to increase productivity! And my job as a lab technician now means I’m gaining a whole other range of skills, can leave that work at work, and make friends with a whole host of school reptiles!

[Photo: the promised bearded dragon.]

My tips, strategies and hacks as a PhD student

Email: m.prosser@pgr.reading.ac.uk

Having been a PhD student for a little over three months, I am perhaps ill-qualified to write a 'PhD tips' blog post, but write one I appear to be doing! A more accurate title would probably be 'study tips in general, but ones which are highly relevant to science PhDs'.

The following are simply the tips that have helped me over the course of my studies; they may be obvious, or not suitable for others, but I write them on the off-chance that something here is useful to someone out there. No doubt I will have many more such strategies by the end of my time here in Reading!

Papers and articles
As a science student you may have encountered these from time to time. The better ones are clearly written and succinct; the worst ones are verbose and obscurantist. If you're not the quickest reader in the world, getting through papers can end up consuming a great deal of your time.

I'm going to advocate speed reading in a bit, but when you start learning to speed read, one of the first things they ask you to consider is: "Do I really need to read this?" If the answer is yes, the next question is: "Do I really need to read all of it?" Perhaps you only need to glance at the abstract, figures and conclusion? After all, time spent reading this is time not spent doing something else, perhaps something more profitable, so do check that the paper really is worth your time before diving in.

So once I've ascertained that the article is indeed worth my time, I sit down with a pencil (or the equivalent for a PDF) and read through the sections I've decided on. Anything that makes my neurons spike ("oh, that's interesting…"), I underline or highlight. Any thoughts or questions that occur to me, I write in the margin. If I feel the need to criticise the paper for being insufficiently clear, I write down those remarks too.

Once I get to the end, I put the article away out of sight and sit down with a blank piece of paper (or a computer) and try to write something very informal about what I've just read. Quite often my mind will go helpfully blank at this point, so I try to finish the following sentence: "The biggest thing (if anything) I learned from this article was…". Completing this one sentence tends to lead to other stuff tumbling out, and I jot it all down in no particular order. Only once the majority of it is down on paper do I take a peek at the annotated piece to see what I missed. (For heaven's sake avoid painting the article yellow with a highlighter!)

Please, please, please, don’t.

This personal blurb is then a good way to quickly remind yourself of the contents of the article in future without having to reread it from scratch. This post-reading exercise need not take more than 15 minutes, and if you're worried about spending the extra time, don't be: you'll save yourself a heap of time in future by not having to reread the damn thing.

Random piece of advice: if you are unaware of the Encyclopedia of Atmospheric Sciences, check it out. Whatever your topic, I guarantee there'll be ten or so shortish entries highly relevant to your particular PhD, and consequently worth knowing about!

Speed reading
This really belongs with the previous section: as is often the way, between the valuable articles that you really should be reading carefully and the stuff for which life's too short, there's a grey area.
For such grey areas I am an advocate of speed reading.
For any electronic text, check out this free website:

Just copy, paste and go! https://accelareader.com/

The pace at which the words flash up doesn't have to be particularly fast (I suggest trying 300 wpm to start with), but the golden rule is never to press pause once you've started. No going back to read stuff you've missed (well, not until you've reached the end, at least!). This method of reading is especially useful for articles that feel like quagmires into which you are slowly sinking. Paradoxically, reading faster in such instances often increases comprehension.

A good way to develop the skill of speed reading is to start on articles you see posted on social media, ones you are not too fussed about getting every single detail from. Just let the text wash over you!

Talks and lectures
I have found it useful to make audio recordings of these. I don't usually listen back, but if something particularly interesting or dense comes up that might be worth revisiting, it can be very worthwhile. I note down the time at which it was said, and can thus track it down in the recording fairly painlessly afterwards.

One tip about note-taking that has stayed with me since I first heard it several years ago is the following: after writing down the title, only make notes on what is surprising or interesting to you; just that! This may result in many lines of notes or none at all, but whatever you do, don't just note down everything that was said. This advice has served me very well.

Organising
Ask me in person if you would like to know my thoughts on this.

Programming to help physical intuition
This is probably more relevant to students like me who didn't come from a maths or physics undergrad and consequently aren't quite as fluent in the old maths. In my undergrad (environmental science), a lot of the time spent studying maths (and, to a lesser extent, physics) involved memorising complicated procedures. The best example of this was a lecture on Fourier series, where the professor took the whole hour to work through the process of getting from an input (x^2) to the output (the first n terms of the Fourier series). Because it took so much space and effort for me to remember this lengthy process, it ended up crowding out the arguably more important conceptual stuff, such as what a Fourier series actually does and why it is so useful. When all is said and done and the final exam is handed in, those concepts are what should (ideally) stick with you, even if the details of how don't.
So here's where I think programming can come in. Firstly, there's nothing like coding up a process to check whether you understand the nuts and bolts of it; but more importantly, once it has been coded up properly, you can play about with the inputs and see how they affect the graphed outputs. Being able to 'play' like this gives you a far more intuitive feel for the model or process than would be possible if you had to redo the laborious calculations by hand each time you changed the input parameters. Three examples of where I have done this myself (a sketch of the first is given after the list):
1. Getting my head around the thermal inertia of the oceans by varying the depths of the surface and deep layers of the ocean in a simple model.
2. Playing around graphically with dispersion.
3. Convincing myself that it really is true that, in the middle of the Northern Hemisphere summer, the North Pole receives more energy per day than the equator.
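
Here is roughly what the first of those looked like. This is a from-memory reconstruction of the idea (a standard two-box energy balance model) rather than my original script, and all the parameter values are illustrative:

```python
import numpy as np

RHO, CP = 1025.0, 3990.0   # seawater density [kg m-3] and specific heat [J kg-1 K-1]
F       = 3.7              # imposed forcing [W m-2], roughly a CO2 doubling
LAMBDA  = 1.2              # climate feedback parameter [W m-2 K-1]
GAMMA   = 0.7              # surface-to-deep heat exchange coefficient [W m-2 K-1]

def run(h_surf, h_deep, years=300, dt_days=5.0):
    """Forward-Euler integration of a two-layer ocean under constant forcing."""
    c1, c2 = RHO * CP * h_surf, RHO * CP * h_deep   # layer heat capacities [J m-2 K-1]
    dt, n = dt_days * 86400.0, int(years * 365 / dt_days)
    t1 = t2 = 0.0                                   # temperature anomalies [K]
    surface = np.empty(n)
    for i in range(n):
        exchange = GAMMA * (t1 - t2)                # heat mixed down to the deep layer
        t1 += dt * (F - LAMBDA * t1 - exchange) / c1
        t2 += dt * exchange / c2
        surface[i] = t1
    return surface

# 'Playing about with the inputs': a deeper mixed layer slows the surface response,
# although both runs approach the same equilibrium warming, F / LAMBDA.
for h in (50.0, 200.0):
    t1 = run(h_surf=h, h_deep=1000.0)
    print(f"mixed layer {h:3.0f} m: {t1[730]:.2f} K after 10 yr, "   # step 730 = 10 yr
          f"{t1[-1]:.2f} K after 300 yr (equilibrium {F / LAMBDA:.2f} K)")
```

Twenty-odd lines of looping, and you can feel the ocean's thermal inertia in a way an hour of blackboard algebra never quite delivered.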

And you?
So, do you have any hard-won study or research tips? If so, do email me, as I would be interested in hearing about them!
Which study hack do you think I (or others) most lack?

AMS Annual Meeting 2019

Email: l.p.blunn@pgr.reading.ac.uk

Between 6th and 10th January 2019 I was fortunate enough to attend the 99th American Meteorological Society (AMS) Annual Meeting, in the society's centennial year. It was hosted in the Phoenix, Arizona Convention Center, whose vast size was a necessity, seeing as there were 2300 oral presentations and 1100 poster presentations given in 460 sessions! The conferences and symposia covered a wide range of topics, such as space weather, hydrology, atmospheric chemistry, climate, meteorological observations and instrumentation, tropical cyclones, monsoons and mesoscale meteorology.

Me outside one half of the Phoenix Convention Center.

1500 people at the awards banquet.

The theme of this year's meeting was "Understanding and Building Resilience to Extreme Events by Being Interdisciplinary, International, and Inclusive". The cost of extreme events has been shown by reinsurance companies to have risen steadily, with estimated costs for 2017 of $306 billion and 350 lives lost in the US. Marcia McNutt, President of the National Academy of Sciences (NAS), gave a town hall talk on the continued importance of evidence-based science in society (view recording). She said that the NAS must become more agile at giving advice, since the timescales of, for example, hurricanes and poor air quality episodes are very short, but the problems are very complex. There is reason for optimism, though, as the new director of the White House Office of Science and Technology Policy is Kelvin Droegemeier, a meteorologist who formerly served as Vice President for Research at the University of Oklahoma.

"Building Resilience to Extreme Events" took on another meaning with the federal shutdown, which proved to be the main talking point of this year's annual meeting. Over 500 people from federally funded organisations such as NOAA could not attend. David Goldston, director of the MIT Washington Office, gave a talk at the presidential forum entitled "Building Resilience to Extreme Political Weather: Advice for Unpredictable Times" (view recording). He made the analogy that both the current US political attitude towards climate change and the federal shutdown are 'weather', and expressed the hope that politics would return to its long-term 'climate'. He advised scientists to present their facts in a way understandable to the public and government, to prepare policy proposals, and to be clear about why they are not biased. He reassured scientists that they have outstanding public support, with 76% of the public thinking scientists act in their best interest. During the talk, questions were sourced from the audience and could be voted on; the frustration of US scientists with the government was evidently large.


Questions put forward by the audience and associated votes during Goldston’s talk.

Ross Herbert (a PDRA in the Reading Meteorology Department) letting his feelings on the federal shutdown be known at the University of Oklahoma after-party.

A growing area of research is artificial and computational intelligence, which had its own dedicated conference. As an early-career researcher in urban and boundary layer meteorology, I was interested to see a talk on "Surface Layer Flux Machine Learning Parametrisations". By obtaining training data from observational towers, it may be possible to improve upon Monin-Obukhov similarity theory in heterogeneous conditions. At the atmospheric chemistry and aerosol keynote talk by Zhanqing Li, I learnt that anthropogenic emissions of aerosol can cause a feedback leading to elevated concentrations of pollutants: aerosol reduces the solar radiation reaching the surface, leading to less turbulence and therefore a lower boundary layer height; it also causes warming at the top of the boundary layer, creating a stronger capping inversion which inhibits ventilation. Anthropogenic aerosols are not just important for air quality: they affect global warming via their influence on the radiation budget, and can lead to more extreme weather by enhancing deep convection.
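
For context, Monin-Obukhov similarity theory is the baseline such machine learning schemes would have to beat. Below is a minimal sketch of the bulk estimate it provides, restricted to the neutral limit for simplicity; nothing here is from the talk, and the wind, height and roughness values are invented:

```python
import numpy as np

KAPPA = 0.4   # von Karman constant

def friction_velocity_neutral(u, z, z0):
    """u* from the neutral log law: u(z) = (u*/KAPPA) * ln(z / z0)."""
    return KAPPA * u / np.log(z / z0)

# 5 m/s wind measured at 10 m over a rough (z0 = 0.1 m) surface, a made-up example.
u_star = friction_velocity_neutral(u=5.0, z=10.0, z0=0.1)
print(f"u* = {u_star:.2f} m/s; kinematic momentum flux u*^2 = {u_star**2:.3f} m2 s-2")
# An ML parameterisation would instead learn this mapping from tower observations,
# including the stable, unstable and heterogeneous cases where the similarity
# functions struggle.
```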

I particularly enjoyed the poster sessions, since they enabled networking with many scientists working in my area. On the first day I bumped into several Reading meteorology undergraduates on their year-long exchange at the University of Oklahoma. Like me, I think they were amazed by the scale of the conference and the number of opportunities available to a meteorologist. The exhibition had over 100 organisations showcasing a wide range of products, publications and services. Anemoment (producers of lightweight, compact 3D ultrasonic anemometers) and the University of Oklahoma had stalls showing how instruments attached to drones can be used to profile the boundary layer, which has numerous possible applications, such as air quality monitoring and analysing boundary layer dynamics.

The exhibition (left: a Lockheed Martin satellite).

3rd year Reading meteorology undergraduates at the poster session.

Overall, I found the conference very motivating: it reinforced the sense that I have a fantastic opportunity to contribute to an exciting and important area of science. Next year's annual meeting, the hundredth, will be held in Boston.

Night at the Museum!

On Friday November 30th, Prof. Paul Williams and I ran a ‘pop-up science’ station at the Natural History Museum’s “Lates” event (these are held on the last Friday of each month; the museum is open for all until 10pm, with additional events and activities). Our station was entitled “Turbulence Ahead”, and focused on communicating research under two themes:

  1.  Improving the predictability of clear-air turbulence (CAT) for aviation
  2.  The impact of climate change on aviation, particularly in terms of increasing CAT

There were several other stations, all run by NERC-funded researchers. Our stall went 'live' at 6pm, and from that point on we were speaking almost constantly for the next 3.5 hours, with hundreds (not an exaggeration!) of people coming to our stall to find out more. Neither of us was able to take much of a break, and I've never had quite such a sore voice!

Turbulence ahead? Not on this Friday evening!

Our discussions covered:

  • What is clear-air turbulence (CAT) and why is it hazardous to aviation?
  • How do we predict CAT? How has Paul’s work improved this?
  • How is CAT predicted to change in the future? Why?
  • What other ways does climate change affect aviation?

Those who came to our stall asked some very intelligent questions, and neither of us encountered a 'climate denier'; since we were speaking about a very applied impact of climate change, this was heartening. This impact of climate change is not often considered: it is not as obvious as heatwaves or melting ice, but it is a very real threat, as shown in recent studies (e.g. Storer et al., 2017). It was a challenge to explain some of these concepts to the general public; some had heard of the jet stream, others had not, whilst some were physicists… and even the director of the British Geological Survey, John Ludden, turned up! It was interesting to hear from so many self-titled "nervous flyers" who were deeply concerned about the future potential for more unpleasant journeys.

I found the evening very rewarding; it was interesting to get a sense of how the public perceive scientists and their work, and it was amazing to see so many curious minds wanting to find out more about subjects with which they are not so familiar.

My involvement with this event stems from my MMet dissertation work with Paul and Tom Frame, looking at the North Atlantic jet stream. Changes in the jet stream have large impacts on transatlantic flights (Williams, 2016) and on the frequency and intensity of CAT. Meanwhile, Paul was a finalist for the 2018 NERC Impact Awards in the Societal Impact category for his work on improving turbulence forecasts; he finished as runner-up at the ceremony held on Monday December 3rd.

So, yes, there may indeed be turbulent times ahead – but this Friday evening certainly went smoothly!

Email: s.h.lee@pgr.reading.ac.uk

Twitter: @SimonLeeWx

References

Storer, L. N., P. D. Williams, and M. M. Joshi, 2017: Global Response of Clear-Air Turbulence to Climate Change. Geophys. Res. Lett., 44, 9979-9984, https://doi.org/10.1002/2017GL074618

Williams, P. D., 2016: Transatlantic flight times and climate change. Environ. Res. Lett., 11, 024008, https://doi.org/10.1088/1748-9326/11/2/024008.