2021 Academic Visiting Scientist – Tim Woollings

Isabel Smith – i.h.smith@pgr.reading.ac.uk

Every year, the Met PhD students at the University of Reading invite a scientist from another university to learn from and to talk to about their own projects. This year we hosted the renowned Professor Tim Woollings, who currently researches and teaches at the University of Oxford. Tim’s interests generally revolve around large-scale atmospheric dynamics and understanding the impacts of climate change on such features. We, as Met PhD students, were very excited and extremely thankful that Tim donated a week of his time (4th-8th of October) and travelled from Oxford for hybrid events within the Met building. Tim told us of his own excitement at being back in Reading, having completed his PhD here on isentropic modelling of the atmosphere and stayed on as a researcher and part of the department until 2013.

The week started with Tim presenting “Jet Stream Trends” at the Dynamical Research Group seminar, known as Hoskins’ Half Hour. A large number of PhD students, postdocs and supervisors attended, which was to be expected considering Tim has written a book dedicated to jet streams. After a quick turnaround, he spoke at the departmental lunchtime seminar on “The role of Rossby waves in polar weather and climate”. Here, Tim gave an initial review of Rossby wave theory and then talked about his fascinating current research on their relevance within the polar atmosphere. The rest of Tim’s Monday consisted of lunch at Park House with Robert Lee and the organising committee, Charlie Suitters, Hannah Croad and Isabel Smith (pictured). Later that evening Tim visited the Three Tuns pub with other staff members, for an important staff meeting! The PhD networking social with Tim on Thursday was a great evening, where 15 to 20 students were able to discuss Tim’s research in a less formal setting in the Park House pub.

Tim Woollings (second from left) and the visiting scientist organising committee

Tim’s Tuesday, Wednesday (morning) and Thursday consisted of virtual and in-person one-on-one 15-minute meetings with PhD students. Here students explained their research projects and Tim gave them a refreshing outsider perspective. On Wednesday afternoon, after Tim attended the High-Resolution Climate Modelling research group, he talked about his career in PhD Group (a research group for PhD students only, where PhD students present to each other). Tim explained how his PhD did not work out as well as he had initially hoped, and the entire room felt a great weight of relief. His advice on keeping calm and looking at the bigger picture was heard by us all.

On Friday the 8th, a mini conference was put on and six students took to the “virtual” and literal stage to present their current findings. Topics ranged from changes to Arctic cyclones and blocking to radar and atmospheric dust. The conference and the week itself were both great successes, with PhD students leaving with inspiring questions to aid their current studies. All at the University of Reading Department of Meteorology were extremely grateful, and we thoroughly enjoyed having Tim here. We wish him all the best in his future endeavours and hope he comes back soon!

COP Climate Action Studio 2021 and a visit to the Green Zone, Glasgow  

Helen Hooker h.hooker@pgr.reading.ac.uk 

Introduction 

SCENARIO DTP and the Walker Academy offered PhD students the opportunity to take part in the annual COP Climate Action Studio (COPCAS) 2021. COPCAS began with workshops on the background of COP, communication and interviewing skills, and an understanding of the COP26 themes and the (massive!) schedule. James Fallon and Kerry Smith were ‘on the ground’ in the Blue Zone in Glasgow during week 1 of COP26, followed by Gwyn Matthews and Jo Herschan during week 2. Interviews were arranged between the COP26 observers and COPCAS participants back in Reading, who were following COP26 events in small groups via livestream. Students summarised the varied and interesting findings by writing blog posts and engaging with social media.

Figure 1: COPCAS in action.   

Motivation, training and week 1 

Personally, I wanted to learn more about the COP process and to understand climate policy implementation and action (or the lack thereof). I was also interested to learn more about anticipatory action and forecast-based financing, which relate to my research. After spending 18 months working remotely in my kitchen, I wanted to meet other students and to get better at formulating and asking questions! I found the initial training reassuring in many ways, especially finding out that so many people have dedicated themselves to driving change and finding solutions. During the first week of COP26 we heard about so many positive efforts to combat the climate crisis, from personal actions to community schemes, and even ambitious country-wide projects such as reforestation in Costa Rica. Momentum seemed to be building, with pledges to stop deforestation and to reduce methane emissions.

Green Zone visit 

Figure 2: Green Zone visit included a weekend full of exhibitors, talks, films and panel discussions, plus a giant inflatable extracting CO2 via bouncing!

During the middle weekend of COP26, some of us visited the Green Zone in Glasgow. This was a mini version of the Blue Zone, open to the public, and offered a wide variety of talks and panel discussions. Stand-out moments for me: a photograph of indigenous children wearing bamboo raincoats, measuring the length of Judi Dench’s tree, the emotive youth speakers from Act4Food Act4Change, and the climate research documentary Arctic Drift, in which hundreds of scientists onboard a ship carried out research whilst locked into the polar winter ice floe.

COPCAS Blog 

During COPCAS I wrote blogs about: a Green Zone event from Space4climate, an interview by Kerry Smith with SEAChange (a community-based project in Aberdeenshire aiming to decarbonise old stone buildings) and Sports for climate action. I also carried out an interview arranged by Jo with WWF on a food systems approach to tackling climate change.

Ultimately though, the elephant in the large COP26 Blue Zone room had been there all along…

Interview with Anne Olhoff, Emissions Gap Report (EGR) 2021 Chief scientific editor and Head of Strategy, Climate Planning and Policy, UNEP DTU Partnership.

Figure 3: UNEP Emissions Gap Report 2021 projections, updated midway through week two of COP26 to account for new pledges. Source: UNEP.

Time is running out. Midway through the second week of COP26, the United Nations Environment Programme (UNEP) presented its assessment of the change to global temperature projections based on the updated pledges so far agreed in Glasgow.

Pledges made prior to COP26 via Nationally Determined Contributions (NDCs) put the world on track for a temperature increase of 2.7°C by the end of the century. To meet the Paris Agreement goal of keeping warming below 1.5°C this century, global greenhouse gas emissions must be reduced by 55% in the next eight years. At this point in COP26, updated pledges accounted for just an 8% reduction – seven times too small to keep to 1.5°C and four times too small to keep to 2°C. Updated projections based on COP26 so far now estimate a temperature rise of 2.4°C by 2100. Net-zero pledges could reduce this by a further 0.5°C; however, plans are sketchy and not included in NDCs. So far just five of the G20 countries are on a pathway to net zero.

Anne’s response regarding policy implementation in law: 

“Countries pledge targets for example for 2030 under the UN framework for climate change and there’s no international law to enforce them, at least not yet. Some countries have put net-zero policies into law, which has a much bigger impact as the government can be held accountable for the implementation of their pledges.” 

Following my own shock at the size of the emissions gap, I asked Anne if she feels there have been any positive changes in recent years:

“I do think we have seen a lot of change, actually…the thing is, things are not moving as fast as they should. We have seen change in terms of the commitment of countries and the policy development and development in new technology needed to achieve the goals, these are all positive developments and here now, changing the whole narrative, just 2 years ago no one would have thought we’d have 70 countries setting net-zero emission targets…we are also seeing greater divergence between countries, between those making the effort to assist the green transition such as the UK, EU and others, and those further behind the curve such as China, Brazil and India. It’s important to help these countries transition very soon, peaking emissions and rapidly declining after that.”   

I asked Anne how countries on track can support others: 

“A lot of the great things here (at COP) is to strengthen that international collaboration and sharing of experiences, it’s an important function of the COP meeting, but we need to have the political will and leadership in the countries to drive this forward.” 

Summary 

The momentum that was apparent during the first week of COP26 seemed to have stalled with this update. Despite the monumental effort of so many scientists, NGOs, individuals and those seeking solutions from every conceivable angle, the pledges made on fossil fuel reduction are still far from what is needed. At the final hour (plus a day), the ambition to ‘phase out’ burning coal was changed to ‘phase down’, and the financial contributions pledged by developed nations to cover loss and damage in countries that are not responsible for climate change, but are impacted by it now, have not been realised. I think this is the first time I have really felt the true meaning of ‘climate justice’. Perhaps we do need a planet law, as it seems our political leaders do not have the will.

Overall, the COPCAS experience has been enjoyable, slightly overwhelming and emotional! It has been great to work together and to share the experiences of those in the Blue Zone. It was also an amazing learning experience; I think I have barely scratched the surface of the entire COP process and I would still like to understand more.

Climate Science and Power 

Gabriel M P Perez – g.martinspalmaperez@pgr.reading.ac.uk 

Introduction  

Climate science, especially climate-change science, is increasingly becoming a source of power in society and is entering politics. As an academic in meteorology, I have started to realise how rarely other scientists and I think about how our profession fits into the power networks that constitute politics; on the contrary, we often seem to think of our scientific outputs as something detached from the wheels of history.

In this essay, I paint a picture of how climate science relates to the main sources of power in the civic sphere by building upon some of my recent readings in history of ideas and philosophy. I also discuss a few past and recent instances where alleged “apolitical” scientific discourse was moulded to support politics of domination and exclusion. By better understanding the relationships of power surrounding our science, we can be more confident that our scientific outputs will contribute more positively to society at large.  

The participation of climate-change science in politics is not exactly new: it has existed for at least three or four decades. However, up until the early 2010s, the hypothesis of anthropogenic global warming still faced a few challenges in the scientific realm. Perhaps the last of these was the alleged 1998-2014 warming hiatus – climate scientists had to answer to the public and come to an agreement as to why the increase in global temperatures appeared to halt. Now, in the third decade of the 21st century, anthropogenic warming is the consensus and the most relevant challengers to climate science are found in the political realm.

Historical background

In the seminal text “A Discourse on the Method of Correctly Conducting One’s Reason and Seeking Truth in the Sciences”, Descartes proposes a method of pure reason for conducting scientific research. The inquisitive individual, he argues, should start by forgetting everything he has learned, derive basic truths from simple logical statements, and build up from there to provide scientific answers to more complex questions using “pure reason”. This text was one of the starting points of a scientific revolution and helped build the foundations of modern science. Although epistemology quickly moved on from Cartesian thought, it still affects the way we do science. After years in academia, trained scientists grow to believe that their scientific outputs are disconnected from the other spheres of society and the networks of power. This “apolitical” mindset then affects, for example, our ideas, hypotheses and communications regarding climate change.

The historian of ideas Michel Foucault, in his book “The Order of Things”, deconstructs the idea that scientific discourse is independent of the surrounding socio-economic environment. He argues that scientific discourse is inherently tied to the “lenses” through which scientists of a certain time are capable of analysing the physical world. Foucault calls these lenses “epistemes”: the ways of thinking that, in each stage of history, define what counts as acceptable scientific discourse. Let us take Descartes’ Discourse as an example: the author lived in a highly religious time and, although in Parts 1 to 3 of the Discourse he describes his method of “pure reason”, in Part 4 he employs that method to argue for the existence of God – something that would not be acceptable scientific discourse in the 21st century. Even the brightest minds, therefore, are subject to having their scientific discourse shaped by the epistemes of their time.

For some scientists, accepting that our science is shaped by factors outside of the realm of pure reason may be uncomfortable. However, embracing our episteme and the historical forces that drive scientific paradigm shifts may aid us in producing and communicating science in ways that are more likely to impact society positively; this could also help prevent distortions and misuses of the power stemming from our science.

For example, the scientific consensus has been distorted in the past to provide an intellectual background for the darkest side of environmentalism: “ecofascism”, a political model that, in the early 20th century, used environmentalism to justify white supremacy and the genocide of indigenous peoples (see the New Yorker article “Environmentalism’s Racist History” by Jedediah Purdy). Sadly, such distortions of the environmental sciences are not buried in the past; on the contrary, they are gaining popularity in certain extremist groups (Lawton, 2019; Taylor, 2019) and even influencing today’s politics: the Portuguese ecofascist party was an important early supporter of the current Brazilian president Jair Bolsonaro, whose policies are ironically accelerating the deforestation of the Amazon rainforest (Pereira, 2020). In United States politics, we recently saw in the media the eco-fascist “shaman” storming the US Capitol after Donald Trump’s defeat.

The power relations of climate science 

Power can be defined as the ability to have others do as you would. Evoking Foucault one more time, there are two kinds of power: repressive and normalising. Repressive power is a second-rate type of power that requires the use of force to control the actions of others. Normalising power, on the other hand, is silent, non-aggressive, and much more effective. Normalising power controls what other people want: if you have normalising power over others, they will do as you would because you have succeeded in making them want the same as you. As climate researchers, our scientific output is a growing source of normalising power. More and more people and governments want to do what climate science says is better for life on Earth. Therefore, “power” hereafter refers to “normalising power”.

Power is present in all spheres of human relations (e.g., family, workplace and institutions). Here we will discuss power in the civic sphere, i.e., the power of having groups of people or societies do as you would. The civics educator Eric Liu suggests that power emanates from six sources. Here, I list four of these sources that I deem most relevant to climate science and discuss some ways that they relate to it: 

  1. Ideas. Ideas, hypotheses and theories about the physics, impacts, mitigation and adaptation of climate change emanate almost exclusively from academia. This directly places climate scientists at top institutions as raw sources of power that shape how people and governments think, behave and act towards climate change. Combining David Hume’s proposition that ideas come from the impressions one has had throughout one’s life with Foucault’s theory of epistemes, we may conclude that this source of power (i.e., scientific ideas) might not be as purely rational as one might have hoped.
  2. Wealth. Since ideas stem from academia, it is important to remember that most top institutions are in the wealthy nations of the Global North. Moreover, most scientists in these institutions were born and raised in those same wealthy nations. Naturally, the bulk of scientific outputs, both in terms of results and communications, is tied to the episteme, or “ways of thinking”, of this particular set of scientists. Wealth is also related to science through the sources of research funding. Decisions regarding the allocation of research funds are often made by boards composed of either:
  • Scientists in wealthy countries or 
  • Influential individuals outside academia 

A controversial example of the latter is the influence that Bill and Melinda Gates, through their foundation, exert over the World Health Organisation, having a disproportionate influence on scientific and public health decisions (Wadman, 2007). The issue is that Bill and Melinda’s suggestions are often not aligned with the public’s best interest or the scientific consensus, but rather with the personal motivations of these individuals. The power of wealth in climate-related negotiations is further evidenced when we notice that ideas such as climate debt (Warlenius, 2018) are typically ignored by current and former imperialist nations. These ideas were advocated by Global South agents at the widely ignored “World People’s Conference on Climate Change and the Rights of Mother Earth” held in Bolivia in 2010.

  3. Numbers. Climate activists, when numerous enough, have the power to pressure or convince governments and individuals to act according to their beliefs. These beliefs are largely based on the climate-change scientific literature. Scientists sometimes also take a more direct approach and practise environmental activism of some sort.
  4. State action. Governments are themselves a source of civic power but are also subject to the other sources of power (i.e., ideas, wealth and numbers). Democratic states, as representatives of the people, have the power to directly and indirectly influence climate change by reducing (or increasing) emissions, funding climate research, educating future generations, and much more. Governmental action, in turn, is constrained by law in states under “Rechtsstaat” (or “the rule of law”). This raises the question: are lawmakers, prosecutors, judges and other agents well equipped to make decisions around climate change? In the coming decades, it is not hard to imagine climate scientists being consulted in climate-change litigation in national or international courts. A few weeks ago, for example, the Brazilian president Jair Bolsonaro was accused of crimes against humanity at the International Criminal Court; his policies were said to be “directly connected to the negative impacts of climate change around the world”.

Conclusion 

In this essay, I have outlined and attempted to disentangle a few existing and emerging power relations around climate change. I argue that as climate scientists we are sources of power in society. Therefore, we should be aware of our own “ways of thinking”, or “epistemes”, and remember that these are driven by external factors. Those external factors have the ability to shape our ideas, hypotheses and communications regarding climate change. Being aware of our role in the complex network of power known as politics could maximise the positive impact of the power stemming from our scientific outputs. Hopefully, this awareness could also help prevent this power from being misdirected to support politics of domination and exclusion. Moreover, as the impacts of climate change become increasingly damaging to life on Earth, it is likely that in the next decades climate scientists will involve themselves with litigation in national and international courts of justice. It is therefore timely for us to be aware of our roles at all levels of politics.

References and further reading

Descartes, René. A Discourse on the Method of Correctly Conducting One’s Reason and Seeking Truth in the Sciences. 1637.

Foucault, Michel. The Order of Things. 1966.

Lawton, Graham. “The rise of real eco-fascism.” New Scientist 243.3243 (2019): 24.

Pereira, Eder Johnson de Area Leão, et al. “Brazilian policy and agribusiness damage the Amazon rainforest.” Land Use Policy 92 (2020): 104491.

Purdy, Jedediah. Environmentalism’s racist history. The New Yorker. 2015 https://www.newyorker.com/news/news-desk/environmentalisms-racist-history

Taylor, Blair. “Alt-right ecology: Ecofascism and far-right environmentalism in the United States.” The Far Right and the Environment. Routledge, 2019. 275-292.

Wadman, Meredith. “Biomedical philanthropy: state of the donation.” Nature 447.7142 (2007): 248-251.

Warlenius, Rikard. “Decolonizing the atmosphere: The climate justice movement on climate debt.” The Journal of Environment & Development 27.2 (2018): 131-155.

Eric Liu Ted Talk about civic power: https://www.youtube.com/watch?v=Cd0JH1AreDw

List of resources about climate debt: 

https://fascinated-soccer-ac0.notion.site/Climate-Debt-Resources-f7f24ce9a7344aa1a3e5290f853b6b4f

History of ecofascism: https://www.youtube.com/watch?v=FkhmP7yDWeY

NCAS Climate Modelling Summer School

Shammi Akhter – s.akhter@pgr.reading.ac.uk

The Virtual Climate Modelling Summer School covers the fundamental principles of climate modelling. The school is run for two weeks each September by leading researchers from the National Centre for Atmospheric Science (NCAS) and the Department of Meteorology at the University of Reading. I attended the school mainly because I have recently started using climate models in the second part of my research, and also because one of my supervisors recommended it to me.

What happened during the first week?

In the first week, lecturers introduced us to the numerical methods used in climate models, and we had a practical assignment implementing a numerical method of our choice in Python. We mostly worked individually on our projects that week. There were also lectures on convection parameterisation and statistical analysis for climate models.
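To give a flavour of the kind of exercise involved, below is a minimal sketch of one classic choice: a first-order upwind scheme for one-dimensional linear advection. The equation, grid and parameter values are my own illustrative picks, not the school's actual assignment.

```python
import numpy as np

# 1-D linear advection du/dt + c du/dx = 0, first-order upwind (c > 0),
# on a periodic domain. Upwind schemes are stable but diffusive.
nx, nt = 100, 200
c, dx, dt = 1.0, 0.01, 0.005
assert c * dt / dx <= 1.0, "CFL condition violated"

x = np.arange(nx) * dx
u = np.exp(-((x - 0.3) ** 2) / 0.005)  # Gaussian initial condition

for _ in range(nt):
    # difference against the point immediately upstream (np.roll wraps around)
    u = u - c * dt / dx * (u - np.roll(u, 1))

print(f"peak after advection: {u.max():.3f} (reduced by numerical diffusion)")
```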

What happened during the second week?

Figure 1: Earth energy budget comparison diagram between control (red) and flat earth (green) experiments produced by me in week 2.

In week two, with the assistance of NCAS and university scientists, we analysed climate model outputs. I was involved in the Flat Earth experiment, in which we tested the effect on the climate of changing the surface elevation of terrain such as mountains and high plateaus. In this experiment, the perturbation is imposed by reducing the elevation of mountains to sea level. There were eight people in our team. We PhD students only occasionally have the opportunity to do research collaboratively with other students in Reading Meteorology, so it felt very nice to work as part of a research group and to encourage our teammates. I was amazed that, through our team effort, we were able to produce a good small piece of scientific work within a matter of days. In Figure 1, I present a small part of our work: the global energy budget comparison between a control experiment and the Flat Earth experiment (where the elevation of the mountains has been reduced to sea level). Alongside our practical, we also attended lectures on ocean dynamics and physics, water in the climate system, land-atmosphere coupling and the surface energy balance.

What was it like to socialize with people virtually?

We used Gather.town during lunchtime and after work to socialize. I was a bit surprised that I was the only student joining Gather.town, and as a result I always ended up hanging out (virtually) with NCAS and university scientists. I rather consider it a blessing, as there was no competition when introducing myself to the professionals. I even received a kind offer from one of our professors to assist him as a teaching assistant in his course in the department.

Concluding Remarks

I learnt about some of the basic concepts of climate modelling and I hope to use these things in my research someday. It was also very refreshing to talk to and work with other students as well as the scientists. While working in a group in week 2, I once again realized there are so many things we can accomplish if we work together and encourage each other.

Machine Learning: complement or replacement of Numerical Weather Prediction? 

Emanuele Silvio Gentile – e.gentile@pgr.reading.ac.uk

Figure 1 Replica of the first 1643 Torricelli barometer [1]

Humans have tried, for millennia, to predict the weather by finding physical relationships between observed weather events, a notable example being a fall in barometric pressure used as an indicator of an upcoming precipitation event. It should come as no surprise that one of the first weather measuring instruments to be invented was the barometer, by Torricelli (see in Fig. 1 a replica of the first Torricelli barometer), nearly concurrently with a reliable thermometer. Only two hundred years later did the development of the electric telegraph allow a nearly instant exchange of weather data, leading to the creation of the first synoptic weather maps in the US, followed by Europe. Synoptic maps allowed amateur and professional meteorologists to look at patterns between weather data in a way that was unprecedentedly effective for the time, allowing the American meteorologists Redfield and Espy to resolve their dispute over which way the air flows in a hurricane (anticlockwise in the Northern Hemisphere).

Figure 2 High Resolution NWP – model concept [2]

By the beginning of the 20th century, many countries around the globe had started to exchange data daily (thanks to the recently laid telegraphic cables), leading to the creation of global synoptic maps, with information on the upper atmosphere provided by radiosondes, aeroplanes and, from the 1930s, radars. By then, weather forecasters had developed a large set of experimental and statistical rules for computing the changes to daily synoptic weather maps by looking at patterns between historical sets of synoptic maps and recorded meteorological events, but predicting events days in advance often remained challenging.

In 1954, a powerful tool became available for objectively computing changes on the synoptic map over time: Numerical Weather Prediction (NWP) models. NWP models numerically solve the primitive equations, a set of nonlinear partial differential equations that approximate the global atmospheric flow, using as initial conditions a snapshot of the state of the atmosphere, termed the analysis, provided by a variety of weather observations. The 1960s, marked by the launch of the first satellites, enabled 5-7 day global NWP forecasts to be performed. Thanks to the work of countless scientists over the past 40 years, global NWP models, running at a scale of about 10 km, can now simulate synoptic-scale and mesoscale weather patterns, such as high-pressure systems and midlatitude cyclones, skilfully and reliably with up to 10 days of lead time [3].

The relatively recent adoption of limited-area convection-permitting models (Fig. 2) has made possible even the forecast of local details of weather events. For example, convection-permitting forecasts of midlatitude cyclones can accurately predict small-scale multiple slantwise circulations, the 3-D structure of convection lines, and the peak cyclone surface wind speed [4].

However, physical processes below convection-permitting resolution, such as wind gusts, which present a risk to lives and livelihoods, cannot be explicitly resolved, but can be derived from prognostic fields such as wind speed and pressure. Alternative techniques, such as statistical modelling (e.g. the Malone model), have not yet matched (and are nowhere near) the power of numerical solvers of the physical equations in simulating the spatio-temporal dynamics of the atmosphere.

Figure 3 Error growth over time [5]

NWP models are not without flaws, as they are affected by numerical drawbacks: errors in the prognostic atmospheric fields build up through time, as shown in Fig. 3, eventually reaching a forecast error comparable to that of a persistence forecast (i.e. the forecast is held constant at each time step) or a climatological forecast (i.e. a mean based on a historical series of observations or model outputs). Errors build up because NWP models iteratively solve the primitive equations approximating the atmospheric flow (either by finite differences or by spectral methods). Sources of these errors include model resolution that is too coarse (leading to an incorrect representation of topography), long integration time steps, and small-scale/sub-grid processes that are unresolved by the model physics approximations. Errors in the parametrisations of small-scale physical processes grow over time, leading to a significant deterioration of forecast quality after 48 h. Therefore, high-fidelity parametrisations of unresolved physical processes are critical for an accurate simulation of all types of weather events.
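The flavour of this error growth is easy to reproduce in a toy chaotic system. The sketch below is purely illustrative (it uses the classic Lorenz 1963 equations with a simple Euler step, not an NWP model): a tiny initial-condition error grows roughly exponentially until it saturates at the size of the attractor, the toy analogue of Fig. 3.

```python
import numpy as np

def lorenz63_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

truth = np.array([1.0, 1.0, 1.0])
forecast = truth + np.array([1e-6, 0.0, 0.0])  # tiny initial error

for step in range(1, 1501):
    truth = lorenz63_step(truth)
    forecast = lorenz63_step(forecast)
    if step % 300 == 0:  # print the error every 3 time units
        print(f"t = {step * 0.01:4.1f}  error = "
              f"{np.linalg.norm(forecast - truth):.2e}")
```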

Figure 4 Met Office HPC [7]

Another limitation of NWP models is the difficulty of simulating the chaotic nature of weather, which causes errors in model initial conditions and model physics approximations to grow exponentially over time. All these limitations, combined with the instability of the atmosphere at its lower and upper boundaries, make rapidly developing events such as flash floods particularly challenging to predict. A further weakness of NWP forecasts is that they rely on expensive High Performance Computing (HPC) facilities (Fig. 4), owned by a handful of industrialised nations, which run coarse-scale global models and high-resolution convection-permitting forecasts on domains covering areas of national interest. As a result, high-resolution prediction of weather hazards and climatological analysis remain off-limits for the vast majority of developing countries, with detrimental effects not just on the first-line response to weather hazards, but also on the development of economic activities such as agriculture, fishing and renewable energy in a warming climate. In the last decade, the cloud computing revolution has led to a tremendous increase in the availability and shareability of weather data sets, which have transitioned from local storage and processing to network-based services managed by large cloud computing companies, such as Amazon, Microsoft or Google, through their distributed infrastructure.

Combined with the wide availability of their cloud computing facilities, access to weather data has become ever more democratic and ubiquitous, and consistently less dependent on HPC facilities owned by national agencies. This transformation is not without drawbacks should these tech giants decide to shut off the flow of data. During a row with the Australian government in February 2021, Facebook banned access to Australian news content. Although by accident, government-related agencies such as the Bureau of Meteorology were also banned, leaving citizens with restricted access to important weather information until the pages were restored. It is hoped that, with more companies providing distributed infrastructure, access to data vital for citizens' security will become more resilient.

The exponentially growing accessibility of weather data sets has stimulated the development and application of novel machine learning algorithms. As a result, weather scientists worldwide can crunch multi-dimensional weather data increasingly effectively, ultimately providing a powerful new paradigm for understanding and predicting the atmospheric flow based on finding relationships between the available large-scale weather datasets.

Machine learning (ML) finds meaningful representations of the patterns in data through a series of nonlinear transformations of the input data. ML pattern recognition falls into two types: supervised and unsupervised learning.

Figure 5 Feed-forward neural network [6]

Supervised learning is concerned with predicting an output for a given input. It is based on learning the relationship between inputs and outputs using training data consisting of example input/output pairs, and is divided into regression and classification, depending on whether the output variable to be predicted is continuous or discrete. Support Vector Machines (SVM) and Support Vector Regression (SVR), Artificial Neural Networks (ANN, with the feed-forward step shown in Fig. 5) and Convolutional Neural Networks (CNN) are examples of supervised learning.
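As a toy illustration of supervised regression (the data and parameters below are invented for the example), a scikit-learn SVR can learn a nonlinear input-output relationship from example pairs:

```python
import numpy as np
from sklearn.svm import SVR

# Toy supervised regression: learn y = sin(x) from noisy example pairs.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, size=(200, 1))        # inputs
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)  # noisy outputs

model = SVR(kernel="rbf", C=10.0)  # nonlinear kernel regression
model.fit(X, y)

print(model.predict([[np.pi / 2.0]]))  # should be close to sin(pi/2) = 1
```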

Unsupervised learning is the task of finding patterns within data without the presence of any ground truth or labelling of the data; a common unsupervised learning task is clustering (finding groups of data points that are close to one another relative to data points outside the cluster). A classic example of unsupervised learning is the K-means clustering algorithm [6].
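And a matching toy illustration of unsupervised clustering (again with invented synthetic data): K-means recovers two "regimes" without ever being shown labels.

```python
import numpy as np
from sklearn.cluster import KMeans

# Two synthetic clusters in a 2-D feature space, e.g. two weather regimes.
rng = np.random.default_rng(1)
regime_a = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(100, 2))
regime_b = rng.normal(loc=[2.0, 1.0], scale=0.5, size=(100, 2))
X = np.vstack([regime_a, regime_b])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)  # close to the true regime centres
```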

So far, ML algorithms have been applied to four key problems in weather prediction:  

  1. Correction of systematic errors in NWP outputs, which involves post-processing data to remove biases [8]
  2. Assessment of the predictability of NWP outputs, evaluating the uncertainty and confidence scores of ensemble forecasts [9]
  3. Extreme-event detection, involving the prediction of severe weather such as hail, gusts or cyclones [10]
  4. NWP parametrisations, replacing empirical models for radiative transfer or boundary-layer turbulence with ML techniques [11]

The first key problem, the correction of systematic errors in NWP outputs, is the most popular area of application of ML methods in meteorology. In this field, wind speed and precipitation observations are often used to perform a linear regression on the NWP data, with the end goal of enhancing its accuracy and resolving local details of the weather left unresolved by the NWP forecast. Although attractive for its simplicity and robustness, linear regression presents two problems: (1) the least-squares methods used to solve it do not scale well with the size of datasets (the required matrix inversion is increasingly expensive for increasing dataset sizes), and (2) many relationships between variables of interest are nonlinear. Instead, classification-tree-based methods have proven very useful for modelling nonlinear weather events, from thunderstorm and turbulence detection to extreme precipitation events and the representation of the circular nature of the wind. Compared to linear regression, tree ensembles scale easily to large datasets with several input variables. Besides preserving the scalability of tree-based methods to large datasets, ML methods such as ANNs and SVM/SVR also provide a more generic and more powerful alternative for modelling nonlinear processes. These improvements have come at the cost of a difficult interpretation of the underlying physical concepts that the model identifies, which is critical given that scientists need to couple these ML models with NWP models based on physical equations for variable interdependence. Indeed, it has proven challenging to interpret the physical meaning of the weights and nonlinear activation functions through which an ANN describes the data patterns and relationships it has found [12].
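A minimal sketch of this kind of post-processing is shown below on synthetic data (the bias structure, variable names and model settings are all invented for illustration): a linear regression and a tree ensemble are both fitted to map raw NWP wind speeds to "observed" ones.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Synthetic NWP wind forecasts with a linear bias plus a mild
# nonlinear error; the "observations" are the targets to learn.
rng = np.random.default_rng(2)
nwp_wind = rng.uniform(0.0, 25.0, size=(1000, 1))  # m/s
obs_wind = (0.8 * nwp_wind.ravel() + 2.0           # linear bias
            + 0.05 * nwp_wind.ravel() ** 1.5       # nonlinearity
            + rng.normal(0.0, 0.5, 1000))          # noise

linear = LinearRegression().fit(nwp_wind, obs_wind)
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(nwp_wind, obs_wind)

test = np.array([[10.0], [20.0]])
print("linear:", linear.predict(test))  # corrects the mean bias
print("forest:", forest.predict(test))  # also captures the nonlinearity
```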

The second key problem, the interpretation of ensemble forecasts, is being addressed by unsupervised learning methods such as clustering, which estimate the likelihood of a forecast by aggregating ensemble members by similarity. Examples include grouping daily weather phenomena into synoptic types, defining weather regimes from upper-air flow patterns, and grouping members of forecast ensembles [13].

The third key problem concerns the prediction of weather extremes, i.e. weather phenomena that are a hazard to lives and economic activities; ML-based methods tend to underestimate these events. The problem here lies with imbalanced datasets, since extreme events represent only a very small fraction of the total events observed [14].

The fourth key problem to which ML is currently being applied is parametrisation. Completely new stochastic ML approaches have been developed, and their effectiveness, along with their simplicity compared to traditional empirical models, has highlighted promising future applications in (moist) convection [15].

Further applications of ML methods are currently limited by intrinsic problems affecting the methods in relation to the challenges posed by weather data sets. While the reduction of the dimensionality of the data by ML techniques has proven highly beneficial for image pattern recognition, in the context of weather data it leads to a marked simplification of the input, since it constrains the input space to individual grid cells in space or time [16]. The recent expansion of ANNs into deep learning has provided new methodologies that can address these issues. This has pushed the capability of ML models within the weather forecasting domain further, with CNNs providing a methodology for extracting complex patterns from large, structured datasets; an example is the CNN model developed by Yunjie Liu in 2016 [17] to classify atmospheric rivers in climate datasets (atmospheric rivers are an important physical process for the prediction of extreme rainfall events).

Figure 7 Sample images of atmospheric rivers correctly classified (true positive) by the deep CNN model in [17]

At the same time, Recurrent Neural Networks (RNN), developed for natural language processing, are improving nowcasting techniques by exploiting their excellent ability to work with the temporal dimension of data frames. CNNs and RNNs have now been combined, as illustrated in Fig. 6, providing the first deep-learning nowcasting method for precipitation, using radar data frames as input [18].

Figure 6 Encoding-forecasting ConvLSTM network for precipitation nowcasting [18]
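A skeletal Keras version of such an encoding-forecasting network might look like the sketch below; the layer sizes, sequence length and loss are arbitrary choices of mine, and the actual architecture in [18] is deeper and trained on large radar archives.

```python
import tensorflow as tf

# Minimal ConvLSTM nowcasting sketch: map a sequence of past radar
# frames, shaped (time, height, width, channels), to the next frame.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10, 64, 64, 1)),   # 10 past frames
    tf.keras.layers.ConvLSTM2D(32, (3, 3), padding="same",
                               return_sequences=True),
    tf.keras.layers.ConvLSTM2D(32, (3, 3), padding="same",
                               return_sequences=False),
    tf.keras.layers.Conv2D(1, (1, 1), activation="sigmoid"),  # next frame
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()  # train with model.fit(past_frames, next_frames)
```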

While these results show a promising application of ML models to a variety of weather prediction tasks which extend beyond the area of competence of traditional NWPs, such as analysis of ensemble clustering, bias correction, analysis of climate data sets and nowcasting, they also show that ML models are not ready to replace NWP to forecast synoptic-scale and mesoscale weather patterns.

As a matter of fact, NWPs have been developed and improved for over 60 years with the very purpose to simulate very accurately and reliably the wind, pressure, temperature and other relevant prognostic fields, so it would be unreasonable to expect ML models to outperform NWPs on such tasks.

It is also true that, as noted earlier, the amount of available data will only grow in the coming decades, so it will be both critical and strategic to develop ML models capable of extracting patterns and interpreting the relationships within such data sets, complementing NWP capabilities. But how long before an ML model is capable of replacing an NWP model by crunching the entire set of historical observations of the atmosphere, extracting the patterns and the spatio-temporal relationships between the variables, and then performing weather forecasts?

Acknowledgement: I would like to thank my colleagues and friends Brian Lo, James Fallon, and Gabriel M. P. Perez, for reading and providing feedback on this article.

References

  1. https://collection.sciencemuseumgroup.org.uk/objects/co54518/replica-of-torricellis-first-barometer-1643-barometer-replica
  2. https://www.semanticscholar.org/paper/High-resolution-numerical-weather-prediction-(NWP)-Allan-Bryan/a40e0ebd388b915bdd357f398baa813b55cef727/figure/6
  3. Buizza, R., Houtekamer, P., Pellerin, G., Toth, Z., Zhu, Y. and Wei, M. (2005). “A comparison of the ECMWF, MSC, and NCEP global ensemble prediction systems”. In: Monthly Weather Review 133, pp. 1076–1097.
  4. Lean, H. and Clark, P. (2003). “The effects of changing resolution on mesoscale modelling of line convection and slantwise circulations in FASTEX IOP16”. In: Quarterly Journal of the Royal Meteorological Society 129, pp. 2255–2278.
  5. http://www.chanthaburi.buu.ac.th/~wirote/met/tropical/textbook_2nd_edition/navmenu.php_tab_10_page_4.3.5.htm
  6. Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning. Springer.
  7. https://www.arup.com/projects/met-office-high-performance-computer
  8. Aznarte, J. L. and Siebert, N. (2017). “Dynamic Line Rating Using Numerical Weather Predictions and Machine Learning: A Case Study”. In: IEEE Transactions on Power Delivery 32.1, pp. 335–343. doi: 10.1109/TPWRD.2016.2543818.
  9. Foley, Aoife M. et al. (2012). “Current methods and advances in forecasting of wind power generation”. In: Renewable Energy 37.1, pp. 1–8.
  10. McGovern, Amy et al. (2017). “Using artificial intelligence to improve real-time decision making for high-impact weather”. In: Bulletin of the American Meteorological Society 98.10, pp. 2073–2090.
  11. O’Gorman, Paul A. and John G. Dwyer (2018). “Using machine learning to parameterize moist convection: Potential for modeling of climate, climate change and extreme events”. In: arXiv preprint arXiv:1806.11037.
  12. Moghim, Sanaz and Rafael L. Bras (2017). “Bias correction of climate modeled temperature and precipitation using artificial neural networks”. In: Journal of Hydrometeorology 18.7, pp. 1867–1884.
  13. Camargo, S. J., Robertson, A. W., Gaffney, S. J., Smyth, P. and Ghil, M. (2007). “Cluster analysis of typhoon tracks. Part I: General properties”. In: Journal of Climate 20.14, pp. 3635–3653.
  14. Ahijevych, David et al. (2009). “Application of spatial verification methods to idealized and NWP-gridded precipitation forecasts”. In: Weather and Forecasting 24.6, pp. 1485–1497.
  15. Berner, Judith et al. (2017). “Stochastic parameterization: Toward a new view of weather and climate models”. In: Bulletin of the American Meteorological Society 98.3, pp. 565–588.
  16. Fan, Wei and Albert Bifet (2013). “Mining big data: current status, and forecast to the future”. In: ACM SIGKDD Explorations Newsletter 14.2, pp. 1–5.
  17. Liu, Yunjie et al. (2016). “Application of deep convolutional neural networks for detecting extreme weather in climate datasets”. In: arXiv preprint arXiv:1605.01156.
  18. Shi, Xingjian et al. (2015). “Convolutional LSTM network: A machine learning approach for precipitation nowcasting”. In: Advances in Neural Information Processing Systems, pp. 802–810.

Ensemble Data Assimilation with auto-correlated model error

Haonan Ren – h.ren@pgr.reading.ac.uk

Data assimilation is a mathematical method that combines forecasts with observations in order to improve the accuracy of the original forecast. Traditionally, data assimilation methods are performed under the perfect-model assumption (the strong-constraint setting). However, there are various sources of model error, such as missing descriptions of parts of the dynamical system and numerical discretisation. Therefore, in recent years, model error has more often been accounted for in the data assimilation process (the weak-constraint setting).

There are several data assimilation methods applied in various fields. My PhD project focuses mainly on the ensemble/Monte Carlo formulation of Kalman filter-based methods, more specifically the ensemble Kalman smoother (EnKS). Unlike a filter, a smoother updates the state of the system using observations from the past, present and possibly the future: it does not only improve the estimate at the observation time, but updates the whole simulation period.

The main purpose of my research is to investigate the performance of data assimilation methods with auto-correlated model error. We want to know what happens if we prescribe a misspecified auto-correlation for the model error, for both state update and parameter estimation. We start with a very simple linear auto-regressive model, and for the auto-correlation of the model error we prescribe an exponentially decaying form. The true system has an exponential decay timescale ωr, while the value used in the forecast and data assimilation is ωg, which can differ from the true one.
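As a sketch of what such auto-correlated model error looks like, the snippet below generates an AR(1) noise sequence whose autocorrelation decays as exp(-Δt/ω). This is my own illustrative discretisation, not necessarily the exact formulation used in the project; note the two limits discussed below, with ω → 0 giving white noise and ω → ∞ a constant bias.

```python
import numpy as np

def correlated_model_error(n_steps, omega, sigma=1.0, dt=1.0, seed=0):
    """AR(1) model error with autocorrelation exp(-dt/omega) per step.

    omega -> 0 recovers white-in-time noise; omega -> infinity keeps
    the first draw forever, i.e. a constant bias.
    """
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / omega) if omega > 0 else 0.0
    eta = np.empty(n_steps)
    eta[0] = sigma * rng.standard_normal()
    for t in range(1, n_steps):
        # innovation variance chosen to keep the stationary variance sigma**2
        eta[t] = (phi * eta[t - 1]
                  + sigma * np.sqrt(1.0 - phi**2) * rng.standard_normal())
    return eta

eta = correlated_model_error(20, omega=5.0)  # memory of ~5 time steps
```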

A simple example can illuminate the decorrelation issue. In Figure 1, we show results of a smoothing process for a simple one-dimensional system over a time window of 20 nature time steps. We use an ensemble Kalman smoother with two different observation densities in time; the memories of the nature model and the forecast model do not coincide. With ωr = 0.0, when the actual model error is a white-in-time random variable, the evolution of the true state of the system behaves rather erratically with the present model settings. If we do not know the memory and assume in the data assimilation process that the model error is a bias (ωg → ∞), the estimate made by the data assimilation method is not even close to the truth, even with very dense observations in the simulation period, as shown in the two left subplots of Figure 1. On the other hand, if the model error in the true model evolution behaves like a bias and we assume in the data assimilation process that it is white in time, the results differ with observation frequency: as shown in the two right subplots of Figure 1, with very frequent observations the data assimilation performs fairly well, but with a single observation the estimate is still not accurate.

Figure 1: Plots of the trajectories of the true state of the system (black), three posterior ensemble members (pink, randomly chosen from 200 members), and the posterior ensemble mean (red). The left subplots show results for a true white noise model error and an assumed bias model error for two observation densities. Note that the posterior estimates are poor in both cases. The right subplots depict a bias true model error and an assumed white noise model error. The result with one observation is poor, while if many observations are present the assimilation result is consistent within the ensemble spread.

In order to evaluate the performance of the EnKS, we compare the root-mean-square error (RMSE) with the ensemble spread of the posterior; the EnKS performs best when the ratio of RMSE to spread is equal to 1.0. The results are shown in Figure 2. As we can see, the Kalman smoother works well when ωg = ωr in all cases, with the ratio of RMSE to spread equal to 1.0. With a relatively high observational frequency (5 or more observations in the simulation window), the RMSE is larger than the spread when ωg > ωr, and vice versa. In a further investigation, the mismatch between the two timescales ωr and ωg barely has any impact on the RMSE; the ratio is dominated by the ensemble spread.

Figure 2: Ratio of MSE over the posterior variance for the 1-dimensional system, calculated using numerical evaluation of the exact analytical expressions. The different panels show results for different numbers of observations.

We then move to estimating the parameter encoded in the auto-correlation of the model error. We estimate the exponential decay parameter by state augmentation using the EnKS, and the results are shown in Figure 3. Instead of the parameter ωg itself, we use the logarithm of the memory timescale to avoid negative memory estimates. The initial log-timescale values are drawn from a normal distribution, ln ωg,i ~ N(ln ωg, 1.0); hence we assume that the prior distribution of the memory timescale is lognormal. According to Figure 3, with an increasing number of windows we obtain better estimates, and the convergence is faster with more observations. In some cases the solution did not converge to the correct value; this is not surprising given the highly nonlinear character of the parameter estimation problem, especially with only one observation per window. When we observe every time step, the convergence is much faster and the variance of the estimate decreases, as shown in the lower two subplots. In this case we always found fast convergence for different combinations of first-guess and true timescales, demonstrating that more observations bring us closer to the truth and hence make the parameter estimation problem more linear.

Figure 3: PDFs of the prior (blue) and posterior (reddish colours) estimated ωg, using an increasing number of assimilation windows. The different panels show results for different observation densities and prior mean larger (a, c) or smaller (b, d) than ωr. The vertical black line denotes the true value ωr.
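Schematically, state augmentation simply appends ln ωg to each ensemble member's state vector, so that the smoother's usual covariance-based update also corrects the parameter. The sketch below is my own minimal illustration (a single linear-Gaussian ensemble update, with observation perturbations omitted for brevity), not the project's actual code:

```python
import numpy as np

def augmented_update(states, log_omegas, obs, obs_var, H):
    """One ensemble Kalman update of the joint (state, ln omega) vector.

    states:     (n_ens, n_x) ensemble of states over the window
    log_omegas: (n_ens,) ensemble of ln(omega_g) estimates
    H:          (n_obs, n_x) linear observation operator
    """
    Z = np.hstack([states, log_omegas[:, None]])  # augmented ensemble
    Hy = states @ H.T                             # predicted observations
    A = Z - Z.mean(axis=0)                        # ensemble anomalies
    D = Hy - Hy.mean(axis=0)
    n = Z.shape[0] - 1
    P_zy = A.T @ D / n                            # cross-covariance
    P_yy = D.T @ D / n + obs_var * np.eye(H.shape[0])
    K = P_zy @ np.linalg.inv(P_yy)                # Kalman gain
    Z = Z + (obs - Hy) @ K.T                      # update every member
    return Z[:, :-1], Z[:, -1]                    # states, ln(omega_g)
```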

As the results of the experiments show, the influence of an incorrect decorrelation timescale in the model error can be significant. We found that when the observation density is high, state augmentation is sufficient to obtain converging results. The new element is that online estimation is possible beyond a relatively simple bias estimate of the model error.

As a next step we will explore the influence of incorrectly specified model errors in nonlinear systems, and a more complex auto-correlation in the model error.

Fluid Dynamics Summer School 

Charlie Suitters – c.c.suitters@pgr.reading.ac.uk 

Every year, Cambridge and the École Polytechnique in Paris alternate hosting duties for the Fluid Dynamics of Sustainability and the Environment (FDSE) summer school. It ran for two weeks earlier in September and, like many other things, took place online this year. After talking to previous attendees of the summer school, I went into the fortnight with excitement but also trepidation, as I had heard that it has an intense programme! Here is my experience of a thoroughly enjoyable couple of weeks.

Structure 

The summer school brought together around 50 PhD students and a few postdocs from all over the world, from Japan to Europe to Arizona, and I have to admire the determination of those students who attended the school at unsociable times of the day! We all came from different backgrounds – some had a meteorological background like myself, but there were also oceanographers, fluid dynamicists, engineers and geographers to name but a few. It was great to hear from so many students who are passionate about their work in two brief ice-breaker sessions where we introduced ourselves to the group and I got to appreciate how wide-reaching the FDSE community is. 

Each day consisted of four one-hour lectures – normally three ‘core’ subjects (fluid dynamics basics, atmospheric dynamics, climate, oceanography, etc.) and one guest lecture per day (including our very own Sue Gray, who gave us a whistle-stop tour of the mesoscale and extratropical cyclones). After this, there was the opportunity to split into breakout groups and speak to the day’s lecturers, to ask them questions and spark discussions in small groups. On the final day, we also had a virtual tour of the various fluid dynamics labs that Cambridge has (there are a lot!) and a few of the students in the labs spoke about their work.

Core Lectures 

Figure 1. Demonstration of a density current (blue) of salty water in a tank of less dense tap water. Taken from Jean-Marc Chomaz’s lecture

These lectures were given by very engaging specialists, including Colm-Cille Caulfield, John Taylor, Alison Ming, Jerome Neufeld and Jean-Marc Chomaz, and provided the perfect opportunity to see lots of pretty videos of fluid flows (Fig. 1). Having done an undergraduate degree in meteorology, I found that many of these lectures were a refresher of things I should already know, but it was interesting to see how other lecturers approach the same material.

The most interesting core lectures for me were those on renewable energy, given by Riwal Plougonven and Alex Stegner. Plougonven lectured us on wind turbines, telling us how they work and why they are designed the way they are – did you know that the most efficient wind turbines actually have two blades, but the vast majority have three for better structural stability? Stegner, on the other hand, spoke to us about hydroelectricity, and I learned that Norway produces nearly all of its electricity through hydropower. Other highlights from the core lectures include watching a video of a research hut being swamped by an avalanche (Nathalie Vriend, see video at the link here), and seeing Jerome Neufeld demonstrate ice flows using golden syrup (he likes his food!).

Guest Lectures 

Figure 2. Flow patterns around a sash window with both slots open – the blue arrows showing incoming cold air and the red arrows showing warm flow to the outside. Taken from Megan Davies Wykes’ lecture.

For me, the guest lectures were the highlights of my time at the summer school. These lectures often explored things beyond my area of expertise, and demonstrated just how applicable the fluid mechanics we had learned are to many different areas of life. We had a lecture about building ventilation from Megan Davies Wykes, which made me realise that adequately ventilating a room is more than simply cracking open a window – you have to consider everything from the size of the room and the outside wind speed to how many windows there are, and even the body heat of the people inside. Davies Wykes’s passion for people using their sash windows correctly will always stick with me – it turns out you need to open both the top and bottom panes for the best ventilation (something she emphasised more than once!), see Fig. 2.

Figure 3. Demonstration of how droplets and plumes of air from the mouth are kept closer to the body when wearing a mask (Bhagat et al. 2020).

Fittingly, we also had a lecture from Paul Linden about the transmission of Covid, and he demonstrated how effective masks are at preventing transmission using a great visualisation (Fig. 3). It was great to have topics such as these that are relevant in today’s world, and provided yet another real-world application of the fluid dynamics we had learned. 

Breakout Discussion Sessions 

Every afternoon, the day’s lecturers returned and invited us to ask them questions about their lectures, or just to have an intelligent discussion about their area of expertise. Admittedly, these could get a little awkward when everyone was too tired to ask anything towards the end of the long two weeks, but they were still incredibly useful: they gave us the means to speak to a professional in their field about their research, and allowed us time to network and ask some challenging questions.

Concluding Remarks 

Of course, over the two weeks we learned so much more than what I have described above, which yet again demonstrates the versatility of the field! The summer school as a whole was organised really well, and the lecturers were engaging and genuinely interested in hearing about us and our projects. I would highly recommend this summer school to any PhD student – its scope was so broad that I am sure there will be something for everyone in the programme, and fingers crossed it goes ahead in Paris next year!

References 

Bhagat, R., Davies Wykes, M., Dalziel, S., & Linden, P. (2020). Effects of ventilation on the indoor spread of COVID-19. Journal of Fluid Mechanics, 903, F1. doi:10.1017/jfm.2020.720 

Diagnosing solar wind forecast errors

Harriet Turner – h.turner3@pgr.reading.ac.uk

The solar wind is a continual outflow of charged particles that comes off the Sun, ranging in speed from 250 to 800 km s−1. During the first six months of my PhD, I have been investigating the errors in a type of solar wind forecast that uses spacecraft observations, known as corotation forecasts. This was the topic of my first paper, where I focussed on extracting the forecast error that occurs due to a separation in latitude between the observing spacecraft and the forecast location. I found that up to a latitudinal separation of 6 degrees, the error contribution was approximately constant; above 6 degrees, the error contribution increases as the latitudinal separation increases. In this blog post I will explain the importance of forecasting the solar wind and the principle behind corotation forecasts. I will also explain how this work has wider implications for future space missions and solar wind forecasting.

The term “space weather” refers to the changing conditions in near-Earth space. Extreme space weather events can cause several effects on Earth, such as damaging power grids, disrupting communications, knocking out satellites and harming the health of humans in space or on high-altitude flights (Cannon, 2013). These effects are summarised in Figure 1. It is therefore important to accurately forecast space weather to help mitigate against these effects. Knowledge of the background solar wind is an important aspect of space weather forecasting as it modulates the severity of extreme events. This can be achieved through three-dimensional computer simulations or through more simple methods, such as corotation forecasts as discussed below.

Figure 1. Cosmic rays, solar energetic particles, solar flare radiation, coronal mass ejections and energetic radiation belt particles cause space weather. Subsequently, this produces a number of effects on Earth. Source: ESA.

Solar wind flow is mostly radial away from the Sun; however, the fast/slow structure of the solar wind rotates around with the Sun. If you were looking down on the ecliptic plane (where the planets lie, at roughly the Sun’s equator), you would see a spiral pattern of fast and slow solar wind, as in Figure 2, which makes a full rotation in approximately 27 days. As this structure rotates around, it allows us to use observations on this plane as a forecast for a point further on in the rotation, assuming a steady-state solar wind (i.e., the solar wind does not evolve in time). For example, in Figure 2, an observation from the spacecraft represented by the red square could be used as a forecast at Earth (blue circle) some time later. This time depends on the longitudinal separation between the two points, as this determines how long the Sun takes to rotate through that angle.

Figure 2. The spiral structure of the solar wind, which rotates anticlockwise. Here, STA and STB are the STEREO-A and STEREO-B spacecraft respectively. The solar wind shown here is the radial component. Source: HUXt model (Owens et al, 2020).
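In essence, the forecast lead time is just the longitudinal separation divided by the solar rotation rate. A tiny illustrative helper (using the roughly 27-day rotation period mentioned above; the function name is my own invention):

```python
def corotation_lag_days(lon_separation_deg, rotation_period_days=27.0):
    """Time for the Sun to rotate through a given longitude angle,
    i.e. the lead time of a corotation forecast from that spacecraft."""
    return rotation_period_days * lon_separation_deg / 360.0

# A spacecraft 60 degrees behind Earth (the L5 configuration discussed
# below) would give a lead time of about 4.5 days:
print(corotation_lag_days(60.0))
```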

In my recent paper I investigated how the corotation forecast error varies with the latitudinal separation of the observation and forecast points. Latitudinal separation varies throughout the year, and it was theorised that it should have a significant impact on the accuracy of corotation forecasts. I used the two spacecraft from the STEREO mission, which are on the same plane as Earth, and a near-Earth dataset. This allowed six different configurations for computing corotation forecasts, with a maximum latitudinal separation of 14 degrees. I analysed the 18-month period from August 2009 to February 2011 to help eliminate other confounding variables. Figure 3 shows the relationship between forecast error and latitudinal separation: up to approximately 6 degrees, there is no significant relationship between error and latitudinal separation; above this, however, the error increases approximately linearly with the latitudinal separation.

Figure 3. Variation of forecast error with the latitudinal separation between the spacecraft making the observation and the forecast location. Error bars span one standard error on the mean.

This work has implications for the future Lagrange space weather monitoring mission, due for launch in 2027. The Lagrange spacecraft will be stationed in a gravitational null, 60 degrees in longitude behind Earth on the ecliptic plane. Gravitational nulls occur where the gravitational fields of two or more massive bodies balance out. There are five of these nulls, called the Lagrange points, and locating a spacecraft at one reduces the amount of fuel needed to stay in position. The goal of the Lagrange mission is to provide a side-on view of the Sun-Earth line, but it also presents an opportunity for consistent corotation forecasts to be generated at Earth. However, the Lagrange spacecraft will oscillate in latitude compared to Earth, up to a maximum of about 5 degrees. My results indicate that the error contribution from latitudinal separation would be approximately constant.

The next steps are to use this information to help improve the performance of solar wind data assimilation. Data assimilation (DA) has led to large improvements in terrestrial weather forecasting and is beginning to be used in space weather forecasting. DA combines observations and model output to find an optimal estimate of reality. The latitudinal information found here can be used to inform the DA scheme how to better handle the observations and, hopefully, to produce an improved solar wind representation.

The work I have discussed here has been accepted into the AGU Space Weather journal and is available at https://agupubs.onlinelibrary.wiley.com/doi/epdf/10.1029/2021SW002802.

References

Cannon, P.S., 2013. Extreme space weather – A report published by the UK royal academy of engineering. Space Weather, 11(4), 138-139.  https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/swe.20032

ESA, 2018. https://www.esa.int/ESA_Multimedia/Images/2018/01/Space_weather_effects 

Owens, M.J., Lang, M.S., Barnard, L., Riley, P., Ben-Nun, M., Scott, C.J., Lockwood, M., Reiss, M.A., Arge, C.N. & Gonzi, S., 2020. A Computationally Efficient, Time-Dependent Model of the Solar Wind for use as a Surrogate to Three-Dimensional Numerical Magnetohydrodynamic Simulations. Solar Physics, 295(3), https://doi.org/10.1007/s11207-020-01605-3