# Publications by Tim Palmer

## Assessing the robustness of multidecadal variability in Northern Hemisphere wintertime seasonal forecast skill

Quarterly Journal of the Royal Meteorological Society Wiley **146** (2020) qj.3890

Recent studies have found evidence of multidecadal variability in Northern Hemisphere wintertime seasonal forecast skill. Here we assess the robustness of this finding by extending the analysis to a diverse set of ensemble atmospheric model simulations. These simulations differ in either numerical model or type of initialisation, and include atmospheric model experiments initialised with reanalysis data as well as free-running atmospheric model ensembles. All ensembles are forced with observed SST and sea-ice boundary conditions. Analysis of large-scale Northern Hemisphere circulation indices (namely the North Atlantic Oscillation, the Pacific North American pattern and the Arctic Oscillation) reveals that all ensembles show larger correlation skill in the late-century period than in the mid-century period. Similar multidecadal variability in skill is found in a measure of total skill integrated over the whole of the extratropics. Most of the difference in large-scale circulation skill between the skilful late period (as well as the early period) and the less skilful mid-century period appears to be due to a reduction of skill over the North Pacific and a disappearance of skill over North America and the North Atlantic. The results are robust across different models and different types of initialisation, indicating that the multidecadal variability in Northern Hemisphere winter skill is a robust feature of 20th-century climate variability. Multidecadal variability in skill therefore arises from the evolution of the observed SSTs, likely related to a weakened influence of ENSO on the predictable extratropical circulation signal during the middle of the 20th century, and is evident in the signal-to-noise ratio of the different ensembles, particularly the larger ensembles.
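For concreteness, the kind of windowed correlation skill analysed here can be computed with a few lines of numpy. This is a minimal sketch with synthetic data; the window length and the data are illustrative, not the paper's:

```python
import numpy as np

def sliding_correlation_skill(ens_mean, obs, window=20):
    """Pearson correlation between an ensemble-mean index and the
    observed index over sliding windows of `window` winters."""
    n = len(obs)
    return np.array([
        np.corrcoef(ens_mean[i:i + window], obs[i:i + window])[0, 1]
        for i in range(n - window + 1)
    ])

# Synthetic illustration: both series share a predictable signal.
rng = np.random.default_rng(0)
signal = rng.standard_normal(100)
obs = signal + rng.standard_normal(100)             # observations = signal + noise
ens_mean = signal + 0.3 * rng.standard_normal(100)  # ensemble mean tracks the signal
skill = sliding_correlation_skill(ens_mean, obs)
```

Multidecadal variability in skill then appears as a systematic rise and fall of this sliding correlation across the century.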

## Beyond skill scores: exploring sub‐seasonal forecast value through a case‐study of French month‐ahead energy prediction

Quarterly Journal of the Royal Meteorological Society Wiley **146** (2020) 3623-3637

We quantify the value of sub‐seasonal forecasts for a real‐world prediction problem: the forecasting of French month‐ahead energy demand. Using surface temperature as a predictor, we construct a trading strategy and assess the financial value of using meteorological forecasts, based on actual energy demand and price data. We show that forecasts with lead times greater than two weeks can have value for this application, both on their own and in conjunction with shorter‐range forecasts, especially during boreal winter. We consider a cost/loss framework based on this example, and show that, while it captures the performance of the short‐range forecasts well, it misses the marginal value present in medium‐range forecasts. We also contrast our assessment of forecast value to that given by traditional skill scores, which we show could be misleading if used in isolation. We emphasise the importance of basing assessment of forecast skill on variables actually used by end‐users.
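The cost/loss framework mentioned above has a standard decision rule: pay a protection cost C whenever the forecast probability of the adverse event exceeds C/L, where L is the loss incurred if unprotected. A minimal sketch of the expected-expense calculation (function and variable names are ours, not the paper's):

```python
def expected_expense(probs, events, cost, loss):
    """Mean expense per case when protecting whenever the forecast
    probability of the adverse event exceeds the cost/loss ratio."""
    threshold = cost / loss
    expense = 0.0
    for p, event in zip(probs, events):
        if p > threshold:
            expense += cost          # protect: pay the cost regardless
        elif event:
            expense += loss          # unprotected and the event occurred
    return expense / len(events)

# Perfect forecasts protect exactly on event days.
print(expected_expense([1.0, 0.0, 0.0, 1.0], [True, False, False, True],
                       cost=1.0, loss=4.0))  # → 0.5
```

Comparing this expense against the never-protect and always-protect climatological baselines gives the familiar value score; the paper's point is that this framework can miss marginal value present in medium-range forecasts.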

## Resilience in the developing world benefits everyone

NATURE CLIMATE CHANGE **10** (2020) 794-795

## Short-term tests validate long-term estimates of climate change

Nature Springer Nature **582** (2020) 185-186

## Rethinking superdeterminism

Frontiers in Physics Frontiers **8** (2020) 139

Quantum mechanics has irked physicists ever since its conception more than 100 years ago. While some of the misgivings, such as it being unintuitive, are merely aesthetic, quantum mechanics has one serious shortcoming: it lacks a physical description of the measurement process. This “measurement problem” indicates that quantum mechanics is at least an incomplete theory—good as far as it goes, but missing a piece—or, more radically, is in need of complete overhaul. Here we describe an approach which may provide this sought-for completion or replacement: Superdeterminism. A superdeterministic theory is one which violates the assumption of Statistical Independence (that distributions of hidden variables are independent of measurement settings). Intuition suggests that Statistical Independence is an essential ingredient of any theory of science (never mind physics), and for this reason Superdeterminism is typically discarded swiftly in any discussion of quantum foundations. The purpose of this paper is to explain why the existing objections to Superdeterminism are based on experience with classical physics and linear systems, but that this experience misleads us. Superdeterminism is a promising approach not only to solve the measurement problem, but also to understand the apparent non-locality of quantum physics. Most importantly, we will discuss how it may be possible to test this hypothesis in an (almost) model independent way.

## Discretization of the Bloch sphere, fractal invariant sets and Bell's theorem

Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences The Royal Society **476** (2020) 20190350

An arbitrarily dense discretization of the Bloch sphere of complex Hilbert states is constructed, where points correspond to bit strings of fixed finite length. Number-theoretic properties of trigonometric functions (not part of the quantum-theoretic canon) are used to show that this constructive discretized representation incorporates many of the defining characteristics of quantum systems: complementarity, uncertainty relationships and (with a simple Cartesian product of discretized spheres) entanglement. Unlike Meyer's earlier discretization of the Bloch sphere, there are no orthonormal triples, hence the Kochen-Specker theorem is not nullified. A physical interpretation of points on the discretized Bloch sphere is given in terms of ensembles of trajectories on a dynamically invariant fractal set in state space, where states of physical reality correspond to points on the invariant set. This deterministic construction provides a new way to understand the violation of the Bell inequality without violating statistical independence or factorization, where these conditions are defined solely from states on the invariant set. In this finite representation, there is an upper limit to the number of qubits that can be entangled, a property with potential experimental consequences.

## Reduced-precision parametrization: lessons from an intermediate-complexity atmospheric model

Quarterly Journal of the Royal Meteorological Society Wiley **146** (2020) 1590-1607

Reducing numerical precision can save computational costs which can then be reinvested for more useful purposes. This study considers the effects of reducing precision in the parametrizations of an intermediate-complexity atmospheric model (SPEEDY). We find that the difference between double-precision and reduced-precision parametrization tendencies is proportional to the expected machine rounding error if individual timesteps are considered. However, if reduced precision is used in simulations that are compared to double-precision simulations, a range of precision is found where differences are approximately the same for all simulations. Here, rounding errors are small enough to not directly perturb the model dynamics, but can perturb conditional statements in the parametrizations (such as convection active/inactive) leading to a similar error growth for all runs. For lower precision, simulations are perturbed significantly. Precision cannot be constrained without some quantification of the uncertainty. The inherent uncertainty in numerical weather and climate models is often explicitly considered in simulations by stochastic schemes that will randomly perturb the parametrizations. A commonly used scheme is stochastic perturbation of parametrization tendencies (SPPT). A strong test on whether a precision is acceptable is whether a low-precision ensemble produces the same probability distribution as a double-precision ensemble where the only difference between ensemble members is the model uncertainty (i.e., the random seed in SPPT). Tests with SPEEDY suggest a precision as low as 3.5 decimal places (equivalent to half precision) could be acceptable, which is surprisingly close to the lowest precision that produces similar error growth in the experiments without SPPT mentioned above. Minor changes to model code to express variables as anomalies rather than absolute values reduce rounding errors and low-precision biases, allowing even lower precision to be used. These results provide a pathway for implementing reduced-precision parametrizations in more complex weather and climate models.
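Reduced-precision experiments of this kind are typically performed by rounding the significand in software rather than on special hardware. A sketch of such an emulator (our own illustration, not the SPEEDY code):

```python
import numpy as np

def round_significand(x, bits):
    """Emulate reduced precision: round x to `bits` explicit
    significand bits, round-to-nearest (10 bits ~ IEEE half)."""
    m, e = np.frexp(np.asarray(x, dtype=np.float64))  # x = m * 2**e, 0.5 <= |m| < 1
    scale = 2.0 ** bits
    return np.ldexp(np.round(m * scale) / scale, e)
```

The anomaly trick mentioned in the abstract follows directly: subtracting a reference state before rounding means the large, constant part of a field no longer consumes significand bits, so the absolute rounding error on the anomaly is smaller.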

## Single-precision in the tangent-linear and adjoint models of incremental 4D-Var

Monthly Weather Review American Meteorological Society **148** (2020) 1541-1552

The use of single-precision arithmetic in ECMWF's forecasting model gave a 40% reduction in wall-clock time over double precision, with no decrease in forecast quality. However, the use of reduced precision in 4D-Var data assimilation is relatively unexplored, and there are potential issues with using single precision in the tangent-linear and adjoint models. Here, we present the results of reducing numerical precision in an incremental 4D-Var data assimilation scheme, with an underlying two-layer quasigeostrophic model. The minimizer used is the conjugate gradient method. We show how reducing precision increases the asymmetry between the tangent-linear and adjoint models. For ill-conditioned problems, this leads to a loss of orthogonality among the residuals of the conjugate gradient algorithm, which slows the convergence of the minimization procedure. However, we also show that a standard technique, reorthogonalization, eliminates these issues and therefore could allow the use of single-precision arithmetic. This work is carried out within ECMWF's data assimilation framework, the Object-Oriented Prediction System (OOPS).

## The physics of numerical analysis: a climate modelling case study

Philosophical Transactions A: Mathematical, Physical and Engineering Sciences Royal Society **378** (2020) 20190058

The case is made for a much closer synergy between climate science, numerical analysis and computer science. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.

## Human creativity and consciousness: unintended consequences of the brain's extraordinary energy efficiency?

Entropy MDPI **22** (2020) 281

It is proposed that both human creativity and human consciousness are (unintended) consequences of the human brain's extraordinary energy efficiency. The topics of creativity and consciousness are treated separately, though they have a common sub-structure. It is argued that creativity arises from a synergy between two cognitive modes of the human brain (which broadly coincide with Kahneman's Systems 1 and 2). In the first, available energy is spread across a relatively large network of neurons, many of which are small enough to be susceptible to thermal (ultimately quantum decoherent) noise. In the second, available energy is focussed on a smaller subset of larger neurons whose action is deterministic. Possible implications for creative computing in silicon are discussed. Starting with a discussion of the concept of free will, the notion of consciousness is defined in terms of an awareness of what are perceived to be nearby counterfactual worlds in state space. It is argued that such awareness arises from an interplay between memories on the one hand, and quantum physical mechanisms (where, unlike in classical physics, nearby counterfactual worlds play an indispensable dynamical role) in the ion channels of neural networks, on the other. As with the brain's susceptibility to noise, it is argued that in situations where quantum physics plays a role in the brain, it does so for reasons of energy efficiency. As an illustration of this definition of consciousness, a novel proposal is outlined as to why quantum entanglement appears to be so counter-intuitive.

## Optimising the use of ensemble information in numerical weather forecasts of wind power generation

Environmental Research Letters IOP Publishing **14** (2019) 124086

Electricity generation output forecasts for wind farms across Europe use numerical weather prediction (NWP) models. These forecasts influence decisions in the energy market, some of which help determine daily energy prices or the usage of thermal power generation plants. The predictive skill of power generation forecasts has an impact on the profitability of energy trading strategies and the ability to decrease carbon emissions. Probabilistic ensemble forecasts contain valuable information about the uncertainties in a forecast. The energy market typically takes basic approaches to using ensemble data to obtain more skilful forecasts. There is, however, evidence that more sophisticated approaches could yield significant further improvements in forecast skill and utility. In this letter, the application of ensemble forecasting methods to the aggregated electricity generation output for wind farms across Germany is investigated using historical ensemble forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF). Multiple methods for producing a single forecast from the ensemble are tried and tested against traditional deterministic methods. All the methods exhibit positive skill, relative to a climatological forecast, out to a lead time of at least seven days. A wind energy trading strategy involving ensemble data is implemented and produces significantly more profit than trading strategies based on single forecasts. It is thus found that ensemble spread is a good predictor for wind power forecast uncertainty and is extremely valuable at informing wind energy trading strategy.

## The impact of a stochastic parameterization scheme on climate sensitivity in EC‐Earth

Journal of Geophysical Research: Atmospheres American Geophysical Union **124** (2019) 12726-12740

Stochastic schemes, designed to represent unresolved sub-grid-scale variability, are frequently used in short- and medium-range weather forecasts, where they are found to improve several aspects of the model. In recent years, the impact of stochastic physics has also been found to be beneficial for the model's long-term climate. In this paper, we demonstrate for the first time that the inclusion of a stochastic physics scheme can notably affect a model's projection of global warming, as well as its historical climatological global temperature. Specifically, we find that when including the 'stochastically perturbed parametrisation tendencies' scheme (SPPT) in the fully coupled climate model EC-Earth v3.1, the predicted level of global warming between 1850 and 2100 is reduced by 10% under an RCP8.5 forcing scenario. We link this reduction in climate sensitivity to a change in the cloud feedbacks with SPPT. In particular, the scheme appears to reduce the positive low cloud cover feedback, and increase the negative cloud optical feedback. A key role is played by a robust, rapid increase in cloud liquid water with SPPT, which we speculate is due to the scheme's non-linear interaction with condensation.
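Schematically, SPPT multiplies the total parametrized tendency by (1 + r), where r is a random field correlated in space and time. A minimal single-column sketch using an AR(1) process in time (coefficients are illustrative, not EC-Earth's):

```python
import numpy as np

def sppt_step(tendency, r_prev, phi=0.9, sigma=0.3, rng=None):
    """One timestep of a schematic SPPT: evolve the AR(1)
    perturbation r and return the perturbed tendency (1 + r) * T."""
    rng = rng or np.random.default_rng()
    # AR(1) keeps the perturbation correlated in time, stationary variance sigma**2
    r = phi * r_prev + sigma * np.sqrt(1.0 - phi**2) * rng.standard_normal(np.shape(tendency))
    r = np.clip(r, -1.0, 1.0)   # bound r so the perturbed tendency keeps its sign
    return (1.0 + r) * tendency, r
```

Because the perturbation is multiplicative, its mean effect on non-linear processes such as condensation need not vanish, which is the kind of interaction the abstract invokes to explain the liquid-water response.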

## The scientific challenge of understanding and estimating climate change

Proceedings of the National Academy of Sciences of the United States of America **116** (2019) 24390-24395

Given the slow unfolding of what may become catastrophic changes to Earth's climate, many are understandably distraught by failures of public policy to rise to the magnitude of the challenge. Few in the science community would think to question the scientific response to the unfolding changes. However, is the science community continuing to do its part to the best of its ability? In the domains where we can have the greatest influence, is the scientific community articulating a vision commensurate with the challenges posed by climate change? We think not.

## Progress Towards a Probabilistic Earth System Model: Examining The Impact of Stochasticity in EC-Earth v3.2

Geoscientific Model Development European Geosciences Union **12** (2019) 3099-3118

We introduce and study the impact of three stochastic schemes in the EC-Earth climate model: two atmospheric schemes and one stochastic land scheme. These form the basis for a probabilistic Earth system model in atmosphere-only mode. Stochastic parametrizations have become standard in several operational weather-forecasting models, in particular due to their beneficial impact on model spread. In recent years, stochastic schemes in the atmospheric component of a model have been shown to improve aspects important for the model's long-term climate, such as El Niño-Southern Oscillation (ENSO), North Atlantic weather regimes, and the Indian monsoon. Stochasticity in the land component has been shown to improve the variability of soil processes and the representation of heatwaves over Europe. However, the raw impact of such schemes on the model mean is less well studied. It is shown that the inclusion of all three schemes notably changes the model mean state. While many of the impacts are beneficial, some are too large in amplitude, leading to significant changes in the model's energy budget and atmospheric circulation. This implies that, in order to maintain the benefits of stochastic physics without shifting the mean state too far from observations, a full re-tuning of the model will typically be required.

## Stochastic weather and climate models

Nature Reviews Physics Springer Science and Business Media LLC **1** (2019) 463-471

## A Stochastic Representation of Subgrid Uncertainty for Dynamical Core Development

Bulletin of the American Meteorological Society American Meteorological Society **100** (2019) 1091-1101

Numerical weather prediction and climate models comprise a) a dynamical core describing the resolved parts of the climate system and b) parameterizations describing the unresolved components. The development of new subgrid-scale parameterizations is particularly uncertain compared to representing resolved scales in the dynamical core. This uncertainty is currently represented by stochastic approaches in several operational weather models, and it will inevitably percolate into the dynamical core. Hence, implementing dynamical cores with excessive numerical accuracy will not bring forecast gains, and may even hinder them, since valuable computer resources will be tied up doing insignificant computation and therefore cannot be deployed for more useful gains, such as increasing model resolution or ensemble sizes. Here we describe a low-cost stochastic scheme that can be implemented in any existing deterministic dynamical core as an additive noise term. This scheme could be used to adjust accuracy in future dynamical-core development work. We propose that such an additive stochastic noise test case should become part of the routine testing and development of dynamical cores in a stochastic framework. The key point of the study is that we should not develop dynamical cores that are more precise than the level of uncertainty provided by our stochastic scheme. In this way, we present a new paradigm for dynamical-core development, ensuring that weather and climate models become more computationally efficient. We show results based on tests with the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System (IFS) dynamical core.

## Accelerating high-resolution weather models with deep-learning hardware

PASC '19 Proceedings of the Platform for Advanced Scientific Computing Conference Association for Computing Machinery (2019)

The next generation of weather and climate models will have an unprecedented level of resolution and model complexity, and running these models efficiently will require taking advantage of future supercomputers and heterogeneous hardware. In this paper, we investigate the use of mixed-precision hardware that supports floating-point operations at double-, single- and half-precision. In particular, we investigate the potential use of the NVIDIA Tensor Core, a mixed-precision matrix-matrix multiplier mainly developed for use in deep learning, to accelerate the calculation of the Legendre transforms in the Integrated Forecasting System (IFS), one of the leading global weather forecast models. In the IFS, the Legendre transform is one of the most expensive model components and dominates the computational cost for simulations at a very high resolution. We investigate the impact of mixed-precision arithmetic in IFS simulations of operational complexity through software emulation. Through a targeted but minimal use of double-precision arithmetic we are able to use either half-precision arithmetic or mixed half/single-precision arithmetic for almost all of the calculations in the Legendre transform without affecting forecast skill.
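The precision trade-off at the heart of this study can be emulated on a CPU: round the matrix inputs to half precision but accumulate the products in single precision, as a Tensor Core does. A numpy sketch (the actual study used a dedicated software emulator inside the IFS):

```python
import numpy as np

def tensor_core_matmul(A, B):
    """Emulate Tensor-Core arithmetic: fp16 inputs, fp32 accumulation."""
    A16 = A.astype(np.float16).astype(np.float32)  # round inputs to half precision
    B16 = B.astype(np.float16).astype(np.float32)
    return A16 @ B16                               # products summed in fp32
```

Comparing this against a full double-precision product shows where the rounding of the inputs, rather than the accumulation, dominates the error, which is the regime the Legendre-transform experiments probe.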

## Posits as an alternative to floats for weather and climate models

CoNGA'19 Proceedings of the Conference for Next Generation Arithmetic 2019 Association for Computing Machinery (2019)

Posit numbers, a recently proposed alternative to floating-point numbers, claim to have smaller arithmetic rounding errors in many applications. By studying weather and climate models of low and medium complexity (the Lorenz system and a shallow water model), we present the benefits of posits compared to floats at 16 bits. As a standardised posit processor does not yet exist, we emulate posit arithmetic on a conventional CPU. Using a shallow water model, forecasts based on 16-bit posits with 1 or 2 exponent bits are clearly more accurate than those based on half-precision floats. We therefore propose a 16-bit posit with 2 exponent bits as a standard format, as its wide dynamic range of 32 orders of magnitude provides great potential for many weather and climate models. Although the focus is on geophysical fluid simulations, the results are also meaningful and promising for reduced-precision posit arithmetic in the wider field of computational fluid dynamics.
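The dynamic range quoted above follows from two quantities in the standard posit definition: useed = 2^(2^es) and maxpos = useed^(n-2) for an n-bit posit with es exponent bits. A quick sketch for the 16-bit formats discussed:

```python
def posit_range(nbits, es):
    """Smallest and largest positive posit magnitudes for an
    nbits-wide posit with `es` exponent bits (standard definitions)."""
    useed = 2 ** (2 ** es)          # regime scaling factor
    maxpos = useed ** (nbits - 2)   # largest representable magnitude
    return 1 / maxpos, maxpos

# 16-bit posits: es=1 reaches 2**28; es=2 reaches 2**56 (~7.2e16),
# hence the much wider dynamic range than IEEE half precision (~6.5e4).
```

The wider the regime range, the fewer bits remain for the fraction near 1, which is why the es=1 and es=2 variants trade dynamic range against accuracy around unity.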

## Signal and noise in regime systems: A hypothesis on the predictability of the North Atlantic Oscillation

Quarterly Journal of the Royal Meteorological Society (2019)

Studies conducted by the UK Met Office reported significant skill in predicting the winter North Atlantic Oscillation (NAO) index with their seasonal prediction system. At the same time, a very low signal-to-noise ratio was observed, as measured using the “ratio of predictable components” (RPC) metric. We analyse both the skill and signal-to-noise ratio using a new statistical toy model, which assumes NAO predictability is driven by regime dynamics. It is shown that if the system is approximately bimodal in nature, with the model consistently underestimating the level of regime persistence each season, then both the high skill and high RPC value of the Met Office hindcasts can easily be reproduced. Underestimation of regime persistence could be attributable to any number of sources of model error, including imperfect regime structure or errors in the propagation of teleconnections. In particular, a high RPC value for a seasonal mean prediction may be expected even if the model's internal level of noise is realistic.
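The flavour of such a regime-based toy model can be sketched as a two-state Markov chain: each day the system stays in its current regime with some persistence probability, and the daily index is the regime mean plus noise. The parameters below are illustrative, not those fitted in the paper:

```python
import numpy as np

def simulate_index(n_days, persistence, means=(-1.0, 1.0), noise=0.5, rng=None):
    """Daily index from a two-regime Markov chain with given persistence."""
    rng = rng or np.random.default_rng()
    regime, index = 0, np.empty(n_days)
    for t in range(n_days):
        if rng.random() > persistence:  # switch regime this day
            regime = 1 - regime
        index[t] = means[regime] + noise * rng.standard_normal()
    return index

# A model that underestimates `persistence` produces seasonal means with
# too weak a predictable signal, mimicking a low "ratio of predictable
# components" alongside genuinely skilful forecasts.
```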

## Experimental Non-Violation of the Bell Inequality

Entropy MDPI **20** (2019) 356