From April 2nd-5th I attended the workshop on Predictability, dynamics and applications research using the TIGGE and S2S ensembles at ECMWF in Reading. TIGGE (The International Grand Global Ensemble, formerly the THORPEX Interactive Grand Global Ensemble) and S2S (Sub-seasonal-to-Seasonal) are datasets hosted primarily at ECMWF as part of initiatives by the World Weather Research Programme (WWRP) and the World Climate Research Programme (WCRP). TIGGE has been running since 2006 and stores operational medium-range forecasts (up to 16 days) from 10 global weather centres, whilst S2S has been operational since 2015 and houses extended-range (up to 60 days) forecasts from 11 different global weather centres (e.g. ECMWF, NCEP, UKMO, Météo-France and CMA). The benefit of these centralised datasets is their common format, which enables straightforward data requests and multi-model analysis with minimal data manipulation, allowing scientists to focus on doing science!
Attendees of the workshop came from around the world (not just Europe) although there was a particularly sizeable cohort from Reading Meteorology and NCAS.
In my PhD so far, I have been making extensive use of the S2S database – looking at both operational and re-forecast datasets to assess stratospheric predictability and biases – and it was rewarding to attend the workshop and see what a diverse range of applications the datasets have across the world. From the oceans to the stratosphere, tropics to poles, predictability mathematics to farmers and energy markets, it was immediately very clear that TIGGE and S2S are wonderfully useful tools for both the research and applications communities. A particular aim of the workshop was to discuss “user-oriented variables” – derived variables from model output which represent the meteorological conditions to which a user is sensitive (such as wind speed at a specific height for wind power applications).
The workshop mainly consisted of 15-minute conference-style talks in the main lecture theatre and poster sessions, but the final two days also featured parallel working group sessions of about 15 members each. I was part of working group 4, and we discussed dynamical processes and ensemble diagnostics. We reflected on some of the points raised by speakers over the preceding days – particular attention was given to the diagnostics needed to understand the dynamical effects of model biases (such as their influence on Rossby wave propagation and weather-regime transitions), alongside what other variables researchers need to make full use of the potential S2S and TIGGE offer (I don’t think I could say “more levels in the stratosphere!” loudly enough – TIGGE does not go above 50 hPa, which is not useful when studying stratospheric warming events defined at 10 hPa).
Data analysis tools are also becoming increasingly important in atmospheric science. Several useful and perhaps less well-known tools were presented at the workshop – Mio Matsueda’s TIGGE and S2S museum websites provide a wide variety of pre-prepared plots of variables like the NAO and MJO, which are excellent for exploratory data analysis without needing many gigabytes of data downloads. Figure 2 shows an example of NAO forecasts from S2S data – the systematic negative NAO bias at longer lead-times was frequently discussed during the workshop, whilst the inability to capture the transition to a positive NAO regime beginning around February 10th is worth further analysis. In addition to these, IRI’s Data Library provides powerful tools to manipulate, analyse, plot, and download data from various sources, including S2S, with server-side computation.
It’s inspiring and motivating to be part of the sub-seasonal forecast research community and I’m excited to present some of my work in the near future!
Some PhD projects are co-organised by an industrial CASE partner which provides supervisory support and additional funding. As part of my CASE partnership with the UK Met Office, in January I had the opportunity to spend 5 weeks at the Exeter HQ, which proved to be a fruitful experience. As three out of my four supervisors are based there, it was certainly a convenient set-up to seek their expertise on certain aspects of my PhD project!
One part of my project aims to understand how certain neighbourhood-based verification methods affect the assessed accuracy of surface air quality forecasts. Routine verification of a forecast model against observations is necessary to provide the most accurate forecast possible. Ensuring that this happens is crucial, as a good forecast may help keep the public aware of potential adverse health risks resulting from elevated pollutant concentrations.
The project deals with two sides of one coin: evaluating forecasts of regional surface pollutant concentrations; and evaluating those of meteorological fields such as wind speed, precipitation, relative humidity or temperature. All of the above have unique characteristics: they vary in resolution, spatial scale, homogeneity, randomness… The behaviour of the weather and pollutant variables is also tricky to compare against one another because the locations of their numerous measurement sites nearly never coincide, whereas the forecast encompasses the entirety of the domain space. This is kind of the crux of this part of my PhD: how can we use these irregularly located measurements to our advantage in verifying the skill of the forecast in the most useful way? And – zooming out still – can we determine the extent to which the surface air pollution forecast is dependent on some of those aforementioned weather variables? And can this knowledge (once acquired!) be used to further improve the pollution forecast?
While at the Met Office, I began my research into methods which analyse forecast skill when a model “neighbourhood” of a particular size around a point observation is evaluated. These methods are being developed as part of a toolkit for evaluating high-resolution forecasts, which can be (and usually are) more accurate than a lower-resolution equivalent, although traditional metrics (e.g. root mean square error (RMSE) or mean error (ME)) often fail to demonstrate the improvement (Mittermaier, 2014). Traditional metrics can also fall victim to verification errors such as the double-penalty problem: an ‘event’ may be missed at a particular time at one grid-point because it was actually forecast at the neighbouring grid-point one time-step out, so the RMSE counts this error along both the spatial and temporal axes. Not fair, if you ask me. So as NWP continues to increase in resolution, there is a need for robust verification methods which somewhat relax the spatial (or temporal) restriction on precise forecast-to-observation matching (Ebert, 2008).
One way forward is a ‘neighbourhood’ approach, which treats a deterministic forecast almost as an ensemble by considering each grid-point around an observation as an individual forecast and formulating a probabilistic score. Neighbourhoods are made of varying numbers of model grid-points, i.e. 3×3, 5×5 or even bigger. A skill score such as the ranked probability score (RPS) or Brier score is calculated using the cumulative probability distribution across the neighbourhood of the exceedance of a sensible pollutant concentration threshold. So, for example, we can ask: what proportion of a 5×5 neighbourhood around an observation correctly forecast an observed exceedance (a ‘hit’)? What if an exceedance was forecast but the observed quantity didn’t reach that magnitude (a ‘false alarm’)? How do these scores change when larger (or smaller) neighbourhoods are considered? And, if these spatial verification methods prove informative, how could they be implemented in operational air quality forecast verification? All these questions will hopefully have some answers in the near future and form a part of my PhD thesis!
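The neighbourhood idea can be sketched in a few lines of NumPy. This is an illustration only, not the operational HiRA code: the grid, threshold and observation here are all made up. The fraction of an n×n neighbourhood exceeding a concentration threshold is treated as a pseudo-probability of exceedance, from which a Brier score follows.

```python
import numpy as np

def neighbourhood_probability(field, i, j, n, threshold):
    """Fraction of an n x n neighbourhood (centred on grid point i, j)
    that exceeds the threshold -- a pseudo-probability of exceedance."""
    half = n // 2
    window = field[i - half:i + half + 1, j - half:j + half + 1]
    return np.mean(window > threshold)

def brier_score(probabilities, outcomes):
    """Mean squared difference between forecast probabilities and the
    observed binary outcomes (1 = exceedance observed, 0 = not)."""
    p = np.asarray(probabilities, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return np.mean((p - o) ** 2)

# Toy example: a 9 x 9 forecast field of pollutant concentration
rng = np.random.default_rng(42)
forecast = rng.normal(loc=40.0, scale=10.0, size=(9, 9))

# One observation site near the grid centre, with an observed exceedance
obs_exceeded = 1
for n in (1, 3, 5):
    p = neighbourhood_probability(forecast, 4, 4, n, threshold=50.0)
    print(n, p, brier_score([p], [obs_exceeded]))
```

Varying `n` in the loop is exactly the question posed above: how does the score change as the neighbourhood grows?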
Although these kinds of methods have been used for meteorological variables, they haven’t yet been widely researched in the context of regional air quality forecasts. The verification framework for this is called HiRA – High Resolution Assessment – which is part of the wider verification network Model Evaluation Tools (which, considering it is being developed as a means of uniformly assessing high-resolution meteorological forecasts, has the most unhelpful acronym: MET). It is quite an exciting opportunity to be involved in the testing and evaluation of this new set of verification tools for a surface pollution forecast at a regional scale, and I am very grateful to be involved in this. Also, having the opportunity to work at the Met Office and “pretend” to be a real research scientist for a while is awesome!
Maarten Ambaum and Mark Prosser
Everybody knows that the key boundary condition for a successful PhD is the provision of plenty of coffee during the day (tea, for some). Our Department has a hot water boiler with a 10 litre water tank capacity to provide an unlimited supply of hot water (it is connected to the tap to keep it topped up automatically). For historical reasons we actually call it the “urn” – I like that word so we stick to it here.
When we got a new urn recently (a “Marco Ecoboiler T10”) we were intrigued to see it had an ECO mode button, presumably promising a lower energy consumption. Indeed, when anyone in the morning saw that the urn was not in “ECO mode”, it was swiftly switched on; green credentials and all that.
One of our postdocs dug up the specs from the internet, where we learned that “ECO mode” actually makes the urn operate with 5 litres of water – a half-full tank. The specs suggest that when switching the urn on you then only need to heat half the amount of water. But is there more to it? Would the urn actually use less energy working with a half-full tank?
I teach atmospheric physics to our Masters and PhD students, and this is precisely the kind of question I would ask them to think about. In fact I sent out an email to all members of the Department, and it turned out that there were different opinions, even amongst those who should know better, although in my view there is obviously only one physically correct answer.
So, let us find out. First some theory (some basic thermodynamics), then experiment, and conclusions at the end.
One of the first things we learn in thermodynamics is conservation of energy: energy in equals energy out. The energy in is the electrical power that the urn uses; the energy out is the hot water we consume (heated from around 15C to around 95C), plus thermal losses, plus the running of the internal electronics of the urn. The last bit is very marginal – just a controller and a few LEDs – so we are going to ignore it. The thermal loss may well be substantial, but the water tank of the urn is actually quite well insulated with Styrofoam, so who knows.
Given that we drink the same amount of coffee, whether the urn is in ECO mode or not, the energy cost for producing the hot water does not depend on whether we run at half tank capacity or full tank capacity. We still need to heat up the same amount of water for our coffee consumption.
What is left is the energy loss. But the energy loss is proportional to the temperature difference between the inside of the tank and the outside. The inside of the tank remains close to 95C all the time, so it looks like the energy loss also cannot depend on whether we are in ECO mode or not.
Energy in equals energy out, energy out remains the same, so energy in should remain the same, ECO mode or not.
Did we miss something? Surely, a feature that is advertised as ECO mode should consume less energy?
We should give the manufacturer some credit. They claim: “This mode saves energy by minimising the energy wasted during machine down-time. The ECO mode is most effective in installations where the machine has a regular ‘off’ period.” Perhaps; perhaps not.
Unfortunately they also claim: “During the ‘off’ period as there is less water in the tank there will be less energy lost to the surrounding environment resulting in an energy saving.” This latter claim is a tricky one: Energy loss is proportional to the temperature difference between the tank and the exterior irrespective of how much water is in the tank. As the heat capacity of the full tank is higher, it will reduce its temperature more slowly, possibly leading to a higher total energy loss, as the temperature differential is kept higher on average for a full tank. So after switching on the urn again, this increased energy loss needs to be topped up. Is that then the way ECO mode helps us being green?
We did what any scientist would do, faced with such a question: do the experiment; this is where Mark comes in. Easy enough: these days you can buy power adaptors that plug into the wall socket and accumulate the total amount of electrical energy used over some period.
We did four experiments: two midweek ones running for three consecutive 24 hour periods from Tuesday to Thursday, two weekend ones running from 6pm on Friday to 9am on Monday. In half of the experiments we left the ECO mode button on, and in the other half, the ECO mode button was left switched off.
Straight to the results:
[Results table: measured energy usage for the midweek (3-day) and weekend periods, with ECO mode on vs. off.]
Lo and behold: it does not make much difference at all and, if anything, ECO mode uses more energy!
Of course the experiment is not carefully controlled: perhaps we drank more coffee during the ECO mode periods, but both weeks were pretty similar in coffee room usage, there were no big events, and the two weekends were pretty much completely quiet. In fact the weekend usage is probably dominated by the usage before 9am on a Monday. We have cleaners that come in very early, and there are quite a few members of staff that come in before nine in the morning, and perhaps even some PhD students!
Let’s do some more analysis of the data: normal usage is about 7 kWh per day, as in the midweek data. That means that, of the 4.1 kWh weekend usage, less than about 1 kWh (about one seventh of a normal day’s usage, to account for the Monday morning usage – I know it is a rough estimate) corresponds to normal usage, and the rest is energy lost while the urn is switched on but not used. I estimate the loss to be 1.7 kWh per day, so a weekend, including the Monday early rush hour, corresponds to about 3.4 kWh of losses and about 0.7 kWh of normal usage.
So, from the 7 kWh daily energy usage, about 1.7 kWh is thermal energy loss (and other bits and bobs, such as the lovely LEDs at the front of the urn), with an error bar, I guess, of possibly 30%. Is this a lot of energy loss? 1.7 kWh per day corresponds to a 70 W loss, about the same as the lighting of a single-person office. Not bad. The Marco Ecoboiler is probably pretty “eco”, but not because of its ECO mode.
We are then left with 5.3 kWh each day to make coffee. A coffee cup is about 200 ml, and assuming the water for the coffee needs heating from 15C to 95C, each cup of coffee requires 0.2 kg × 80 K × 4200 J/kg/K = 67 kJ of energy, or 0.019 kWh. That means that 5.3 kWh corresponds to about 280 cups of coffee per day. Probably quite realistic, given the size of our Department.
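The back-of-envelope coffee arithmetic above is easy to check in a few lines (same figures as in the text):

```python
# Energy to heat one 200 ml cup of water from 15C to 95C
mass_kg = 0.2            # 200 ml of water is about 0.2 kg
delta_T = 95 - 15        # temperature rise in K
c_water = 4200           # specific heat capacity of water, J/kg/K

energy_J = mass_kg * delta_T * c_water   # joules per cup
energy_kWh = energy_J / 3.6e6            # 1 kWh = 3.6 million J
cups_per_day = 5.3 / energy_kWh          # 5.3 kWh available for coffee

print(round(energy_J / 1000), "kJ per cup")   # 67 kJ
print(round(energy_kWh, 3), "kWh per cup")    # 0.019 kWh
print(round(cups_per_day), "cups per day")    # 284 cups
```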
Should we switch off the urn overnight? Well, an overnight period (all losses, as there is no usage of the urn, perhaps for about 11 hours) would use about 0.8 kWh. But, of course, the tank will have cooled down, perhaps to 30C, and needs reheating to 95C. For a 10 litre tank this costs about 0.8 kWh. Funny, that: it is probably better to just leave the tank on overnight, to prevent people from using highly inefficient kitchen kettles, and to save people from having to wait for the urn to heat up in the morning.
Actually, this is not as much coincidence as it may seem: the thermal loss during the night switch off period must of course equal the loss in thermal energy of the water, which then needs to be replenished when we reheat the water back to 95C.
As I said before, the full tank could well lose more energy as it keeps relatively warmer during the cooling off period compared to the half full tank of ECO mode. But a quick calculation, assuming a well-insulated tank, shows that the temperature reduction is proportional to (T0-Te) / k with T0 the initial tank temperature (95C), Te the external temperature, and k the heat capacity of the tank. So, indeed, a full tank, with larger k, has a smaller temperature reduction with time, and remains warmer on average. But the energy cost of this reduction of course equals the heat capacity k times the change in temperature: k x (T0-Te) / k = (T0-Te), so we get an energy loss proportional to (T0-Te), but independent of the heat capacity k of the tank. It looks like the engineers at the manufacturers overlooked some basic physics.
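This argument can also be checked numerically. A minimal sketch, assuming Newton's law of cooling dT/dt = −(h/k)(T − Te), with an illustrative loss coefficient h of 0.9 W/K (roughly consistent with the ~70 W loss at a 75 K temperature difference estimated above); the tank sizes are the real ones, everything else is made up:

```python
import math

def reheat_energy(k, h, T0, Te, hours):
    """Energy (J) needed to restore the tank to T0 after it has cooled
    for `hours` under Newton's law of cooling, with heat capacity k (J/K)
    and heat-loss coefficient h (W/K)."""
    t = hours * 3600.0
    T = Te + (T0 - Te) * math.exp(-h * t / k)  # temperature after cooling
    return k * (T0 - T)

h = 0.9                 # W/K, illustrative loss coefficient
k_full = 10 * 4200.0    # heat capacity of 10 litres of water, J/K
k_half = 5 * 4200.0     # 5 litres, i.e. ECO mode

for hours in (1, 11):
    e_full = reheat_energy(k_full, h, 95, 20, hours)
    e_half = reheat_energy(k_half, h, 95, 20, hours)
    print(hours, "h off:", round(e_full / 3.6e6, 3), "vs",
          round(e_half / 3.6e6, 3), "kWh")
```

For a short off-period the two tanks lose almost the same energy, as the linearised argument above predicts; over 11 hours the full tank does lose somewhat more, because it stays warmer for longer, just as speculated earlier.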
By the way: how long would it take to reheat the tank in the morning if it had cooled down to 30C overnight? Well, at full pelt the urn uses 2.8 kW, so a required energy of 0.8 kWh takes about 15 minutes to produce. Pretty long wait. Probably not worth the frustration.
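The overnight numbers can be verified the same way (same figures as in the text; the 30C morning temperature is, of course, a guess):

```python
tank_litres = 10
c_water = 4200        # J/kg/K
T_hot, T_cool = 95, 30
power_kW = 2.8        # full heating power of the urn

# Energy to reheat the cooled-down tank back to 95C (1 litre ~ 1 kg)
reheat_J = tank_litres * (T_hot - T_cool) * c_water
reheat_kWh = reheat_J / 3.6e6

reheat_minutes = reheat_kWh / power_kW * 60

print(round(reheat_kWh, 2), "kWh to reheat")          # 0.76 kWh
print(round(reheat_minutes), "minutes at full power")  # 16 minutes
```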
So, to conclude: our Ecoboiler is quite “eco”: it wastes only about 70W in thermal losses, not so bad for a Department that uses big computing resources (not so “eco”).
The thermal energy losses from the urn are pretty modest in the grand scheme of things, and it turns out to be better to just leave the urn on overnight, as the cost of reheating the cold urn in the morning is nearly the same as the energy cost of leaving it on. Leaving the urn on over the weekend is probably also better than switching off, because the occasional weekend user will end up using a highly inefficient kitchen kettle.
The “ECO mode” button makes the urn operate at half tank capacity, but the thermodynamic arguments as well as the measurements show that it uses at least the same amount of energy in ECO mode. In fact, at half capacity the tank contains more steam, and the steam is possibly slightly hotter, on average, than the liquid, so more energy may be lost through conduction. Just leave the ECO mode button switched off; it doesn’t do any good.
**Scroll to the bottom for picture of a bearded dragon.**
A full-time PhD is not always what you see yourself doing. Perhaps you don’t like the idea of being an academic, going through the realities of post-doc life, and battling for the few research roles out there. Maybe you want to get a job in industry, but keep your hand in the research pool. Maybe you have other commitments, meaning that your time is limited but you want to still learn and build your research skills. Whatever the reason, there is always an option to go part-time.
After doing a year and a bit full-time, I knew I wanted to work outside of academia in something more practical than an office-based PhD. Wanting to make use of the work I’d already started, my supervisors, my funders and I agreed that a part-time MPhil gave the outcomes that all parties wanted. It means I can finish my studies sooner and have something tangible for the years of study, while also providing new research on my topic that can be used by subsequent researchers.
But how to broach the subject in the first place? You need to take a bit of time to examine the reasons why you want to change, but not so long that you end up regretting never saying how you feel. It’s really important at this stage to assess your options and think about the practicalities, like how the change will affect your funding.
It is important to work out how your new schedule will fit together. Part-time doesn’t mean a few hours a week; it means half of what a full-time PhD student would do. With my hours, that means I do 12 hours a week and then work during school holidays. Realistically I won’t get much time off, but it fits into a roughly 8-to-6 schedule. It’s important to keep your weekends as free as possible, because social time will help keep you sane!
And in terms of touching base with your supervisor, for me that means coming in once a fortnight, and keeping a record of everything I’ve been up to each day, so I know exactly where I am on my project objectives. You and your supervisor need to be realistic about how much you can complete in a given time, and that your work won’t happen as quickly, so regulating expectations is important. And if things aren’t working, then it’s important to look at them again, perhaps with the help of your Monitoring Committee, to keep you on top of your work.
It’s also important to learn to say no – anyone who knows me knows I struggle with this! People might be under the impression that you have more time to take on other stuff now that you’re part-time, but you have to know what you can make time for in your schedule (like writing a short blog), what might bring other benefits (little bit of open day volunteering), and what really isn’t your problem to worry about!
Having gone part-time, a lot of the stresses seem to have relaxed; it’s nice to not feel like the PhD is all-consuming, and I’m finding it easier to manage my targets each fortnight. If anything, knowing I only have a limited window for work seems to increase productivity! And my job as a lab technician now means I’m gaining a whole other range of skills, can leave that work at work, and make friends with a whole host of school reptiles!
Having been a PhD student for a little over 3 months I am perhaps ill-qualified to write such a ‘PhD tips’ type of blog post, but write one I appear to be doing! It’s probably actually more accurately titled ‘study tips in general but ones which are highly relevant to science PhDs.’
The following are just my tips on what have helped me over the course of my studies and may be obvious or not suitable for others, but I write them on the off-chance that something here is useful to someone out there. No doubt I will have many more such strategies by the end of my time here in Reading!
Papers and articles As a science student you may have encountered these from time to time. The better ones are clearly written and succinct, the worse ones are verbose and obscurantist. If you’re not the quickest reader in the world, getting through papers can end up consuming a great deal of your time.
I’m going to advocate speed reading in a bit but when you start learning speed reading, one of the things they ask you to think about first is “Do I really need to read this?”. If the answer is yes, then the next question is “Do I really need to read all of it?”. Perhaps you only need to glance at just the abstract, figures and conclusion? After all, time spent reading this is time not spent doing something else, something more profitable perhaps, so do check that it really is worth your time before diving in.
So once I’ve ascertained that the article is indeed worth my time, I sit down with a pencil (or the equivalent for a PDF) and read through the sections I’ve decided on. Anything that makes my neurons spike (“oh that’s interesting….”), I underline or highlight. Any thoughts or questions that occur to me, I write in the margin. If I feel the need to criticise the paper for being insufficiently clear then I write down these remarks, too.
Once I get to the end, I put the article away out of sight, sit down with a blank piece of paper (or at a computer) and try to write something very informal about what I’ve just read. Quite often my mind will go helpfully blank at this point, so I try to finish the following sentence: “The biggest thing (if anything) I learned from this article was….”. Completing this one sentence tends to lead to other stuff tumbling out, and I jot it all down in no particular order. Only once the majority of it is down on paper do I take a peek at the annotated piece to see what I missed. (For heaven’s sake, avoid painting the article yellow with a highlighter!)
This personal blurb that you have produced is then a good way to quickly remind yourself of the contents of that article in the future without having to reread it from scratch. This post-reading exercise need not take more than 15 minutes but if you’re worried about spending this extra time, don’t be. You’ll save yourself a heap of time in future by not having to reread the damn thing.
Random piece of advice – if you are unaware of the Encyclopedia of Atmospheric Sciences, then check it out. Whatever your PhD topic I guarantee there’ll be 10 or so shortish entries which are all highly relevant to your particular PhD topic and consequently worth knowing about!
Speed reading This really follows on from the previous section but, as is often the way, between the valuable articles that you really should be reading and the stuff for which life’s too short, there’s a grey area. For such grey areas I am an advocate of speed reading. For any electronic texts check out this free website:
The pace the words flash up doesn’t have to be particularly fast (I suggest trying 300 wpm to start with), but the golden rule is never to press pause once you’ve started. No going back to read stuff you’ve missed (well, not until you’ve reached the end at least!). This method of reading is especially useful for any articles that feel like quagmires into which you are slowly sinking. Paradoxically, reading faster in such instances often increases one’s comprehension.
A good way to develop the skill of speed reading is to start on articles you see posted on social media, articles that you are not too fussed about getting every single detail. Just let it wash over you!
Talks and lectures I have found it useful to make audio recordings of these. I don’t usually listen back, but if something was particularly interesting or dense and worth revisiting, it can be very worthwhile. I make a note of the time at which that something was said, and can thus track it down in the recording fairly painlessly afterwards.
One tip about note taking that has stayed with me since I first heard it several years back was the following: after writing down the title, only make notes on what is surprising or interesting to you, just that! This may result in many lines of notes or no lines at all, but whatever you do, don’t just make notes of everything that was said. This advice has been very useful for me.
Organising Ask me in person if you would like to know my thoughts on this.
Programming to help physical intuition This is probably most relevant to students like me who didn’t come from a maths or physics undergrad and consequently aren’t quite as fluent in the old maths… or perhaps to undergrads, for that matter. In my undergrad (environmental science), much of the time I spent studying maths (and, to a lesser extent, physics) involved memorising complicated procedures. The best example of this was a lecture on Fourier series where the professor took the whole hour to work through the process of getting from an input (x^2) to the output (the first n terms of the Fourier series). Because it took so much space and effort for me to remember this lengthy process, it ended up crowding out the arguably more important conceptual material, such as what a Fourier series actually does and why it is so useful. When all is said and done and the final exam is handed in, these concepts are what should (ideally) stick with you, even if the details of how don’t.

So here’s where I think programming can come in. Firstly, there’s nothing like coding up some process to check whether you understand the nuts and bolts of it. More importantly, once it has been coded up properly you can play about with the inputs to see how they affect the graphed outputs. Being able to ‘play’ like this gives you a more intuitive feel for the model or process than would be possible if you had to manually redo the laborious calculations each time you wanted to change the input parameters. Three examples of where I have done this myself:

1. Getting my head around the thermal inertia of the oceans by varying the depths of the surface and deep layers of the ocean in a simple model.
2. Playing around graphically with dispersion.
3. Convincing myself that it really is true that, in the middle of the Northern Hemisphere summer, the North Pole receives more energy per day than the equator.
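As a small illustration of this kind of ‘play’ (a sketch of my own, not from any lecture notes): the Fourier series of x² on [−π, π] is π²/3 + Σ 4(−1)ⁿ/n² · cos(nx), and a few lines of NumPy let you vary the number of terms and watch the partial sums approach the parabola.

```python
import numpy as np

def fourier_x2(x, n_terms):
    """Partial sum of the Fourier series of f(x) = x^2 on [-pi, pi]:
    x^2 = pi^2/3 + sum over n >= 1 of 4*(-1)^n / n^2 * cos(n*x)."""
    total = np.full_like(x, np.pi**2 / 3)
    for n in range(1, n_terms + 1):
        total += 4 * (-1) ** n / n**2 * np.cos(n * x)
    return total

x = np.linspace(-np.pi, np.pi, 201)
for n_terms in (1, 3, 10, 50):
    err = np.max(np.abs(fourier_x2(x, n_terms) - x**2))
    print(n_terms, err)   # the maximum error shrinks as terms are added
```

Swapping the `print` for a plot of `fourier_x2(x, n_terms)` against `x**2` is where the intuition really comes from: you can watch the wiggly cosines settle onto the parabola as n grows.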
And you? So do you have any hard won study/research tips? If so do email me as I would be interested in hearing about them! Which study hack do you think I (or others) are most lacking?
Between 6th-10th January 2019 I was fortunate enough to attend the 99th American Meteorological Society (AMS) Annual Meeting in their centennial year. It was hosted in the Phoenix Convention Center, Arizona – its vast size was a necessity, seeing as there were 2300 oral presentations and 1100 poster presentations given across 460 sessions! The conferences and symposia covered a wide range of topics such as space weather, hydrology, atmospheric chemistry, climate, meteorological observations and instrumentation, tropical cyclones, monsoons and mesoscale meteorology.
The theme of this year’s meeting was “Understanding and Building Resilience to Extreme Events by Being Interdisciplinary, International, and Inclusive”. The cost of extreme events has been shown by reinsurance companies to have increased monotonically, with estimated costs for 2017 of $306 billion and 350 lives in the US. Marcia McNutt, President of the National Academy of Sciences (NAS), gave a town hall talk on the continued importance of evidence-based science in society (view recording). She said that NAS must become more agile at giving advice, since the timescales of, for example, hurricanes and poor air quality episodes are very short, but the problems are very complex. There is reason for optimism though, as the new director of the White House Office of Science and Technology Policy is Kelvin Droegemeier, a meteorologist who formerly served as Vice President for Research at the University of Oklahoma.
“Building Resilience to Extreme Events” took on another meaning with the federal shutdown, which proved to be the main talking point of this year’s annual meeting. Over 500 people from federally funded organisations such as NOAA could not attend. David Goldston, director of the MIT Washington Office, gave a talk at the presidential forum entitled “Building Resilience to Extreme Political Weather: Advice for Unpredictable Times” (view recording). He drew an analogy between both the current US political attitude towards climate change and the federal shutdown as being ‘weather’, and thought that politics would return to the long-term ‘climate’. He advised scientists to present their facts in a way understandable to the public and government, to prepare policy proposals, and to be clear on why they are not biased. He reassured scientists by saying they have outstanding public support, with 76% of the public thinking scientists act in their best interest. During the talk, questions were sourced from the audience and could be voted on. The frustration of US scientists with the government was evident.
Questions put forward by the audience and associated votes during Goldston’s talk.
A growing area of research is artificial and computational intelligence which had its own dedicated conference. As an early career researcher in urban and boundary layer meteorology I was interested to see a talk on “Surface Layer Flux Machine Learning Parametrisations”. By obtaining training data from observational towers it may be possible to improve upon Monin-Obukhov similarity theory in heterogeneous conditions. At the atmospheric chemistry and aerosol keynote talk by Zhanqing Li I learnt that anthropogenic emissions of aerosol can cause a feedback leading to elevated concentration of pollutants. Aerosol reduces solar radiation reaching the surface leading to less turbulence and therefore lower boundary layer height. It also causes warming at the top of the boundary layer creating a stronger capping inversion which inhibits ventilation. Anthropogenic aerosols are not just important for air quality. They affect global warming via their influence on the radiation budget and can lead to more extreme weather through enhancing deep convection.
I particularly enjoyed the poster sessions, since they enabled networking with many scientists working in my area. On the first day I bumped into several Reading meteorology undergraduates on their year-long exchange at the University of Oklahoma. Like me, I think they were amazed by the scale of the conference and the number of opportunities available as a meteorologist. The exhibition had over 100 organisations showcasing a wide range of products, publications and services. Anemoment (producers of lightweight, compact 3D ultrasonic anemometers) and the University of Oklahoma had stalls showing how instruments attached to drones can be used to profile the boundary layer. This has numerous possible applications, such as air quality monitoring and analysing boundary-layer dynamics.
Overall, I found the conference very motivating since it reinforced the sense that I have a fantastic opportunity to contribute to an exciting and important area of science. Next year’s annual meeting is the hundredth and will be held in Boston.
On Friday November 30th, Prof. Paul Williams and I ran a ‘pop-up science’ station at the Natural History Museum’s “Lates” event (these are held on the last Friday of each month; the museum is open for all until 10pm, with additional events and activities). Our station was entitled “Turbulence Ahead”, and focused on communicating research under two themes:
Improving the predictability of clear-air turbulence (CAT) for aviation
The impact of climate change on aviation, particularly in terms of increasing CAT
There were several other stations, all run by NERC-funded researchers. Our stall went ‘live’ at 6pm, and from that point on we were speaking almost constantly for the next 3.5 hours – with hundreds (not an exaggeration!) of people coming to our stall to find out more. Neither of us was able to take much of a break, and I’ve never had quite such a sore voice!
Our discussions covered:
What is clear-air turbulence (CAT) and why is it hazardous to aviation?
How do we predict CAT? How has Paul’s work improved this?
How is CAT predicted to change in the future? Why?
What other ways does climate change affect aviation?
Those who came to our stall asked some very intelligent questions, and neither of us encountered a ‘climate denier’ – since we were speaking about a very applied impact of climate change, this was heartening. This impact of climate change is not often considered – it is not as obvious as heatwaves or melting ice, but it is a very real threat, as shown by recent studies (e.g. Storer et al. 2017). It was a challenge to explain some of these concepts to the general public – some had heard of the jet stream, others had not, whilst some were physicists… and even the director of the British Geological Survey, John Ludden, turned up! It was interesting to hear from so many self-described “nervous flyers” who were deeply concerned about the potential for more unpleasant journeys in future.
I found the evening very rewarding; it was interesting to gain a perspective on how the public perceive scientists and their work, and it was amazing to see so many curious minds wanting to learn more about subjects with which they are less familiar.
My involvement with this event stems from my MMet dissertation work with Paul and Tom Frame looking at the North Atlantic jet stream. Changes in the jet stream have large impacts on transatlantic flights (Williams 2016) and the frequency and intensity of CAT. Meanwhile, Paul was a finalist for the 2018 NERC Impact Awards in the Societal Impact category for his work on improving turbulence forecasts – he finished as runner-up in the ceremony which was held on Monday December 3rd.
I am very excited to have won a prize of £5,000 in the #NERCImpact Awards, for making flights smoother and safer through our turbulence research.
Congratulations to all the finalists — what a fantastic array of NERC-funded projects making a real difference to people’s lives! pic.twitter.com/N1guMSys3k