Research

We study different aspects of the predictability of weather and climate as complex physical systems.

Research Areas

Assessing model uncertainty in weather and climate forecasts

All weather and climate forecasts are subject to uncertainties arising from imperfect knowledge of initial and boundary conditions, and from model errors associated with missing or poorly resolved processes. Handling the uncertainty that arises from model errors is very challenging and no widely accepted methodology exists. Here we develop and test different strategies to address model uncertainty in GCM-based weather and climate forecasts. We aim to identify, understand and constrain the physical processes that dominate forecast uncertainty. We particularly focus on uncertainties that are relevant to both weather and climate forecasting, e.g. by taking a seamless prediction approach. Our ultimate goal is to improve the prospects for providing more reliable probabilistic weather and climate forecasts to users.

Publications:

High-resolution modelling

It is now understood that many small-scale processes have a profound influence on large-scale climate and climate variability. Including small-scale processes in our numerical models of weather and climate allows us to better understand their contribution to the large-scale circulation. Small-scale processes can be included explicitly in these models by increasing the horizontal resolution. High horizontal resolution is, however, very expensive in terms of computation time and resources, and it is therefore important to develop a good understanding of the impact of increased horizontal resolution in order to justify its use in operational models.

Horizontal Resolution: The four horizontal resolutions of the ECMWF IFS integrated under the Athena Project.

Predictability on seasonal time scales

Atmospheric predictability on seasonal time scales arises from slowly varying boundary conditions such as the ocean and the land surface. The most prominent example of a coupled ocean-atmosphere process on these time scales is the El Niño-Southern Oscillation (ENSO). ENSO is the single largest source of seasonal predictability in many regions of the world.

Our research interests include:

  • skill assessment of ECMWF's dynamical seasonal forecasting system S4
  • multi-model seasonal forecasts
  • forecast quality assessment of the ENSEMBLES seasonal-to-interannual forecasts
  • case studies

Publications:

Predictability on decadal time scales

The climate system exhibits variability on a wide range of timescales. Decadal predictability is a relatively new area of research that explores the impact of atmospheric and oceanic initial conditions, as well as of boundary forcings, on near-term climate predictions. The next Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5), due in 2013, will include a dedicated chapter on "Near-Term Climate Change: Projections and Predictability" in its Working Group 1 volume on the Physical Science Basis.

In our latest study (Corti et al., 2012) we assess the reliability of decadal predictions of SST and 2m temperature based on a 54-member ensemble of the ECMWF coupled model. We show that the reliability of the ensemble system is good over global land areas, Europe and Africa, and for the North Atlantic, Indian Ocean and, to a lesser extent, North Pacific basins for lead times of up to 6-9 years. North Atlantic SSTs are reliably predicted even when the climate trend is removed, consistent with the known predictability of this region. By contrast, reliability in the Indian Ocean, where external forcing accounts for most of the variability, deteriorates severely after de-trending. More conventional measures of forecast quality, such as the anomaly correlation coefficient of the ensemble mean, are also considered, showing that the ensemble has significant skill in predicting multi-annual temperature averages.
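
As an illustration of this measure, the following minimal sketch computes the anomaly correlation coefficient of an ensemble mean. The arrays and numbers are hypothetical and have nothing to do with the Corti et al. (2012) data; they only show how the score is defined.

    import numpy as np

    def anomaly_correlation(forecast_ens, observations, climatology):
        """Anomaly correlation coefficient (ACC) of the ensemble mean.

        forecast_ens : array (members, time) of ensemble forecasts
        observations : array (time,) of verifying observations
        climatology  : array (time,) used to define anomalies
        """
        f_anom = forecast_ens.mean(axis=0) - climatology   # ensemble-mean anomaly
        o_anom = observations - climatology                # observed anomaly
        return np.sum(f_anom * o_anom) / np.sqrt(np.sum(f_anom ** 2) * np.sum(o_anom ** 2))

    # Hypothetical example: a 54-member ensemble verified over 20 multi-annual means
    rng = np.random.default_rng(0)
    truth = rng.standard_normal(20)
    ens = truth + rng.standard_normal((54, 20))   # forecasts = truth + noise
    print(anomaly_correlation(ens, truth, np.zeros(20)))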

Publications:

Inexact but efficient computing hardware

We study the use of inexact hardware in numerical weather and climate models. Inexact hardware promises to reduce the computational cost and power consumption of supercomputers and could be a shortcut to higher-resolution forecasts with higher forecast accuracy. However, simulations with inexact hardware show numerical errors, such as rounding errors or bit flips. We therefore want to treat different parts of the modelled atmospheric dynamics and physics with customised computational accuracy that reflects their inherent uncertainties. Planetary-scale waves are more predictable and less uncertain than meso-scale waves, while for the small-scale dynamics, diffusion, parametrisation schemes and sub-grid-scale variability cause large inherent uncertainties. At the same time, the smallest scales occupy overwhelmingly more degrees of freedom in an atmospheric model than the large, predictable scales. We investigate a scale-separation approach that calculates the dynamics of the expensive small scales at low numerical precision, while the dynamics of the large scales are calculated at high precision.
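
To make the emulation of inexact hardware more concrete, the following minimal sketch rounds double-precision numbers to a reduced number of significand bits. It is a simple numpy-based illustration of the concept, not the emulator used in our studies, and the bit counts in the example are arbitrary.

    import numpy as np

    def reduce_precision(x, significand_bits):
        """Emulate inexact hardware by rounding x to a reduced number of
        significand bits; the exponent keeps its double-precision range."""
        x = np.asarray(x, dtype=np.float64)
        m, e = np.frexp(x)                      # x = m * 2**e with |m| in [0.5, 1)
        scale = 2.0 ** significand_bits
        m = np.round(m * scale) / scale         # round the significand
        return np.ldexp(m, e)

    # Example: pi stored with 52, 15 and 5 significand bits
    for bits in (52, 15, 5):
        print(bits, reduce_precision(np.pi, bits))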

Publications that provide an overview:

As a first step, we investigated the use of inexact hardware in the spectral dynamical core of an atmosphere model (the IGCM), emulating two different types of inexact hardware within the simulations. We showed that large parts of the model can tolerate a strong reduction in accuracy and that scale separation is a powerful and valid approach. We published two papers on results from this setup, covering weather-type and climate-type simulations, which showed that precision can be reduced from 64 bits in double precision to only 15 bits for the parts of the model that cause 98% of the computational cost of the double-precision control simulation.
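
The following sketch illustrates the scale-separation idea on a 1D periodic field: spectral coefficients above a cut-off wavenumber are rounded to reduced precision, while the large scales stay in double precision. It reuses reduce_precision from the sketch above; the cut-off and the 15-bit value are illustrative and are not the configuration used for the IGCM.

    import numpy as np

    def scale_dependent_precision(field, cutoff_wavenumber, low_bits=15):
        """Round the small-scale part of a 1D periodic field to reduced
        precision while keeping the large scales in full double precision."""
        spec = np.fft.rfft(field)
        k = np.arange(spec.size)                       # wavenumber index
        small = k > cutoff_wavenumber                  # mask for the small scales
        re = np.where(small, reduce_precision(spec.real, low_bits), spec.real)
        im = np.where(small, reduce_precision(spec.imag, low_bits), spec.imag)
        return np.fft.irfft(re + 1j * im, n=field.size)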

Relevant publications:

Recent hardware developments provide several approaches to trade numerical precision against performance that could potentially be used in earth-system modelling. We worked with hardware developers at Rice University (USA) and at EPFL (Switzerland) to study the use of so-called pruned chips, which reduce the physical size of the floating-point unit by removing all parts that are either hardly used or do not have a strong impact on significant parts of the calculation. In this collaboration, we developed a customised floating-point unit to calculate the Lorenz '95 model (a toy model for atmospheric dynamics). Again, scale separation was essential to reduce precision significantly in large parts of the code. We showed that about 75% of the computational cost of the double-precision control simulation could be moved to hardware with up to 66% savings in power and 26% savings in delay (i.e. an increase in performance) for the floating-point adder-subtractor block, and 92% savings in power and 19% savings in delay for the floating-point multiplier block, with only very small degradations in the results of the model simulations. If scale separation was ignored, the whole model could still be calculated on hardware with 49% and 16% savings in power and delay for the adder-subtractor block and 75% and 22% savings for the multiplier block. Although complete hardware designs for pruned floating-point units were developed during the study, the use of the inexact hardware was still emulated within the model, since the production of prototype hardware is extremely expensive.
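
For reference, the sketch below implements the single-scale Lorenz '95 model with emulated reduced-precision arithmetic, reusing reduce_precision from above. The parameters (40 variables, forcing F = 8) are the standard textbook values, and the software emulation is only a stand-in for the pruned hardware studied in the collaboration.

    import numpy as np

    def lorenz95_tendency(x, forcing=8.0, bits=52):
        """Tendency of the Lorenz '95 model,
        dx_j/dt = (x_{j+1} - x_{j-2}) * x_{j-1} - x_j + F,
        evaluated with emulated reduced precision."""
        adv = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1)
        return reduce_precision(adv - x + forcing, bits)

    def integrate(x0, dt=0.01, steps=1000, bits=52):
        """Fourth-order Runge-Kutta integration at emulated precision."""
        x = x0.copy()
        for _ in range(steps):
            k1 = lorenz95_tendency(x, bits=bits)
            k2 = lorenz95_tendency(x + 0.5 * dt * k1, bits=bits)
            k3 = lorenz95_tendency(x + 0.5 * dt * k2, bits=bits)
            k4 = lorenz95_tendency(x + dt * k3, bits=bits)
            x = reduce_precision(x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0, bits)
        return x

    x0 = 8.0 + 0.01 * np.random.default_rng(1).standard_normal(40)
    print(integrate(x0, bits=52)[:3])   # double-precision reference
    print(integrate(x0, bits=15)[:3])   # emulated inexact arithmetic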

Relevant publications:

The second approach to trade precision against performance on existing hardware is the use of Field-Programmable Gate Arrays (FPGAs) in earth-system modelling. FPGAs are integrated circuits that can be customised by the user ("programmable" hardware). FPGAs promise a large increase in performance and allow variable levels of floating-point precision. However, it is still hard work to enable a numerical model to run on an FPGA, and the model code needs to be rewritten in an appropriate computing language. We cooperated with a working group at Imperial College London and enabled the Lorenz '95 model to run on an FPGA. We used this setup to investigate different approaches to finding the minimal acceptable level of precision that can be used without a strong reduction of model quality in short- and long-term simulations. We found that it is reasonable to investigate model errors in cheap short-term simulations (e.g. simulations with only 50 timesteps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also showed that an approach that reduces precision with increasing forecast time is very promising in weather-type simulations: once the forecast error has grown, due to model error and initial-condition uncertainty, precision can be reduced much more strongly than directly after initialisation. On the FPGA, we showed that reducing precision can cut computing time by almost a factor of two compared with single-precision simulations, with no strong increase in model error. A reduced-precision simulation on the FPGA is more than 40 times faster than the same calculation in single precision on a single CPU.
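
The following sketch illustrates the idea of reducing precision with increasing forecast time, reusing lorenz95_tendency and reduce_precision from the sketches above. The schedule thresholds and bit counts are invented for illustration and are not the values determined on the FPGA.

    def precision_schedule(step, steps_per_day=100):
        """Illustrative schedule: high precision early in the forecast, reduced
        precision once initial-condition and model errors have grown."""
        if step < 1 * steps_per_day:
            return 52
        if step < 3 * steps_per_day:
            return 20
        return 10

    def forecast_with_schedule(x0, dt=0.01, steps=500):
        """Runge-Kutta forecast whose arithmetic precision drops with lead time."""
        x = x0.copy()
        for n in range(steps):
            bits = precision_schedule(n)
            k1 = lorenz95_tendency(x, bits=bits)
            k2 = lorenz95_tendency(x + 0.5 * dt * k1, bits=bits)
            k3 = lorenz95_tendency(x + 0.5 * dt * k2, bits=bits)
            k4 = lorenz95_tendency(x + dt * k3, bits=bits)
            x = reduce_precision(x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0, bits)
        return x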

Relevant publications:

  • Peter D. Düben, Francis P. Russell, Xinyu Niu, Wayne Luk, T. N. Palmer (2014), Minimal numerical precision in simulations of a chaotic system on an FPGA, in preparation and forwarded upon request

We also investigated the influence of rounding errors on model simulations in a collaboration with the Goethe University Frankfurt. If inexact hardware is used, the forcing due to rounding errors in the governing differential equations will have a certain distribution, with a specific mean and variance, and can be expected to be fairly uncorrelated in space and time if a time step involves several floating-point operations on floating-point numbers of different magnitude and if rounding errors are reasonably small. We studied the properties of rounding errors and compared their magnitude to the forcing of a stochastic parametrisation scheme in a model of the 1D Burgers equation with stochastic closure. We showed that the stochastic parts of the stochastic parametrisation scheme can be used to hide hardware errors, and that the magnitude of the stochastic forcing can serve as a first guess for the upper limit on the magnitude of rounding errors that still allows simulations with no change in model quality. Exploiting the similarity of rounding errors to additive and multiplicative noise, we engineered rounding errors in model simulations with emulated inexact hardware to fit the distribution of the stochastic forcing of the stochastic parametrisation scheme as closely as possible, by adding a small number of operations to the calculation of each prognostic variable in each time step. We then used the engineered rounding errors to replace the stochastic forcing of the stochastic parametrisation scheme and obtained model results of similar quality to the double-precision control simulation with stochastic forcing. We argue that rounding errors can be beneficial to numerical simulations for the same reasons that stochastic forcings in stochastic parametrisation schemes can be beneficial in weather and climate models. Finally, we showed that rounding errors open new perspectives for ensemble methods, since a very small initial perturbation, of the order of magnitude of the rounding error, changes the rounding-error forcing over the entire simulation in a way that appears to be sufficiently uncorrelated between ensemble members.
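
As a toy illustration of how rounding errors can resemble a stochastic forcing (this is not the Burgers-equation setup of the study), the sketch below computes the forcing introduced by reduced-precision arithmetic in a single time step and compares its statistics with additive Gaussian noise of the same amplitude. It reuses reduce_precision from above, and all numbers are made up.

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.standard_normal(100_000)              # toy prognostic values
    tendency = rng.standard_normal(100_000)       # toy tendencies
    dt = 0.01

    # Rounding-error forcing: difference between the reduced-precision update
    # and the exact double-precision update of the same time step
    exact = x + dt * tendency
    inexact = reduce_precision(x + dt * reduce_precision(tendency, 12), 12)
    rounding_forcing = inexact - exact

    # A stochastic-parametrisation-like forcing of comparable amplitude
    stochastic_forcing = rounding_forcing.std() * rng.standard_normal(x.size)

    print("rounding forcing:   mean %.2e, std %.2e"
          % (rounding_forcing.mean(), rounding_forcing.std()))
    print("stochastic forcing: mean %.2e, std %.2e"
          % (stochastic_forcing.mean(), stochastic_forcing.std()))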

Relevant publications:

  • Peter D. Düben, Stamen Dolaptchiev (2014), A comparison between rounding errors of inexact hardware and a stochastic forcing of a stochastic parametrisation scheme, in preparation and forwarded upon request

Development of stochastic parametrisations

Description.

Publications:

  • Shutts, G., M. Leutbecher, A. Weisheimer, T. Stockdale, L. Isaksen and M. Bonavita (2011). Representing model uncertainty: stochastic parametrizations at ECMWF. ECMWF Newsletter, 129, 19-24. [pdf]
  • Palmer, T.N., R. Buizza, F. Doblas-Reyes, T. Jung, M. Leutbecher, G.J. Shutts, M. Steinheimer and A. Weisheimer (2009b). Stochastic parametrization and model uncertainty. ECMWF Tech. Memo. 598, 42pp. [pdf]
  • Berner, J., F.J. Doblas-Reyes, T.N. Palmer, G. Shutts and A. Weisheimer (2009). Impact of a quasi-stochastic cellular automaton backscatter scheme on the systematic error and seasonal prediction skill of a global climate model. In "Stochastic Physics and Climate Modelling", by T.N. Palmer and P. Williams (Eds.), Chapter 15, 375-395, Cambridge University Press.

Seamless prediction of weather and climate

Publications:

Climate change

Description.

Publications:

Predictability of Weather and Climate group wiki page

We have a wiki page for internal use with information about

  • the cirrus cluster
  • the use of the ECMWF computing systems