Publications by Tim Palmer
Journal of Climate 30 (2017) 17-38
© 2017 American Meteorological Society. El Niño-Southern Oscillation (ENSO) is the dominant mode of interannual variability in the tropical Pacific. However, the models in the ensemble from phase 5 of the Coupled Model Intercomparison Project (CMIP5) have large deficiencies in ENSO amplitude, spatial structure, and temporal variability. The use of stochastic parameterizations as a technique to address these pervasive errors is considered. The multiplicative stochastically perturbed parameterization tendencies (SPPT) scheme is included in coupled integrations of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4). The SPPT scheme results in a significant improvement to the representation of ENSO in CAM4, improving the power spectrum and reducing the magnitude of ENSO toward that observed. To understand this impact, additive and multiplicative noise in a simple delayed oscillator (DO) model of ENSO is considered. Additive noise results in an increase in ENSO amplitude, but multiplicative noise can reduce the magnitude of ENSO, as was observed for SPPT in CAM4. In light of these results, two complementary mechanisms are proposed by which the improvement occurs in CAM4. Comparison of the coupled runs with a set of atmosphere-only runs indicates that SPPT first improves the variability in the zonal winds through perturbing the convective heating tendencies, which improves the variability of ENSO. In addition, SPPT improves the distribution of westerly wind bursts (WWBs), important for the initiation of El Niño events, by increasing the stochastic component of WWBs and reducing their overly strong dependency on SST compared to the control integration.
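The additive-versus-multiplicative noise experiment lends itself to a minimal sketch. The toy delayed oscillator below is only in the spirit of the paper's DO model; the equation form, parameters (a, b, tau, sigma) and Euler-Maruyama integration are illustrative choices of mine, not those used in the study:

```python
import numpy as np

def run_do(noise="additive", sigma=0.2, a=1.0, b=1.3, tau=6.0,
           dt=0.02, n_steps=30000, seed=0):
    """Euler-Maruyama integration of a toy delayed oscillator,
    dT/dt = a*T - T**3 - b*T(t - tau) + forcing, where the forcing
    is either additive noise or noise multiplying the tendency
    itself (the essence of a multiplicative SPPT-like perturbation)."""
    rng = np.random.default_rng(seed)
    lag = int(tau / dt)
    T = np.zeros(n_steps)
    T[:lag + 1] = 0.1  # small warm anomaly as initial history
    for i in range(lag, n_steps - 1):
        tend = a * T[i] - T[i] ** 3 - b * T[i - lag]
        xi = rng.standard_normal()
        if noise == "additive":
            T[i + 1] = T[i] + dt * tend + np.sqrt(dt) * sigma * xi
        else:  # multiplicative: perturb the whole tendency
            T[i + 1] = T[i] + dt * tend * (1.0 + sigma * xi)
    return T

# Compare the simulated "ENSO" amplitude under the two noise types.
amp_add = np.std(run_do("additive"))
amp_mult = np.std(run_do("multiplicative"))
```

Comparing `amp_add` and `amp_mult` across noise levels is the kind of experiment by which the paper contrasts the two noise types.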
Atmospheric seasonal forecasts of the twentieth century: multi-decadal variability in predictive skill of the winter North Atlantic Oscillation (NAO) and their potential value for extreme event attribution
Quarterly Journal of the Royal Meteorological Society 143 (2017) 917-926
© 2016 The Authors. Quarterly Journal of the Royal Meteorological Society published by John Wiley & Sons Ltd on behalf of the Royal Meteorological Society. Based on skill estimates from hindcasts made over the last couple of decades, recent studies have suggested that considerable success has been achieved in forecasting winter climate anomalies over the Euro-Atlantic area using current-generation dynamical forecast models. However, previous-generation models had shown that forecasts of winter climate anomalies in the 1960s and 1970s were less successful than forecasts of the 1980s and 1990s. Given that the more recent decades have been dominated by the North Atlantic Oscillation (NAO) in its positive phase, it is important to know whether the performance of current models would be similarly skilful when tested over periods of a predominantly negative NAO. To this end, a new ensemble of atmospheric seasonal hindcasts covering the period 1900–2009 has been created, providing a unique tool to explore many aspects of atmospheric seasonal climate prediction. In this study we focus on two of these: multi-decadal variability in predicting the winter NAO, and the potential value of the long seasonal hindcast datasets for the emerging science of probabilistic event attribution. The existence of relatively low skill levels during the 1950s–1970s period has been confirmed in the new dataset. The skill of the NAO forecasts is larger, however, in earlier and later periods. Whilst these inter-decadal differences in skill are, by themselves, only marginally statistically significant, the variations in skill strongly co-vary with statistics of the general circulation itself, suggesting that such differences are indeed physically based. The mid-century period of low forecast skill coincides with a negative NAO phase, but the relationship between the NAO phase/amplitude and forecast skill is more complex than linear.
Finally, we show how seasonal forecast reliability can be of importance for increasing confidence in statements of causes of extreme weather and climate events, including effects of anthropogenic climate change.
Quarterly Journal of the Royal Meteorological Society 143 (2017) 897-908
© 2016 The Authors. Quarterly Journal of the Royal Meteorological Society published by John Wiley & Sons Ltd on behalf of the Royal Meteorological Society. Increasing the resolution of numerical models has played a large part in improving the accuracy of weather and climate forecasts in recent years. Until now, this has required the use of ever more powerful computers, the energy costs of which are becoming increasingly problematic. It has therefore been proposed that forecasters switch to using more efficient ‘reduced precision’ hardware capable of sacrificing unnecessary numerical precision to save costs. Here, an extended form of the Lorenz ‘96 idealized model atmosphere is used to test whether more accurate forecasts could be produced by lowering numerical precision more at smaller spatial scales in order to increase the model resolution. Both a scale-dependent mixture of single- and half-precision – where numbers are represented with fewer bits of information on smaller spatial scales – and ‘stochastic processors’ – where random ‘bit-flips’ are allowed for small-scale variables – are emulated on conventional hardware. It is found that high-resolution parametrized models with scale-selective reduced precision yield better short-term and climatological forecasts than lower resolution parametrized models with conventional precision for a relatively small increase in computational cost. This suggests that a similar approach in real-world models could lead to more accurate and efficient weather and climate forecasts.
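The "fewer bits on smaller scales" idea can be emulated on conventional hardware by rounding the floating-point significand, much as the paper describes. A minimal sketch of that emulation trick (the helper name and bit counts are my own illustrative choices):

```python
import numpy as np

def reduce_precision(x, mantissa_bits):
    """Emulate a float with the given number of explicit significand
    bits by rounding the fractional part returned by frexp. Roughly,
    10 bits mimics IEEE half precision and 23 bits single precision."""
    if mantissa_bits >= 52:
        return np.asarray(x, dtype=np.float64)
    m, e = np.frexp(np.asarray(x, dtype=np.float64))  # x = m * 2**e, m in [0.5, 1)
    scale = 2.0 ** mantissa_bits
    return np.ldexp(np.round(m * scale) / scale, e)

# Scale-selective use: keep large-scale variables in single-like
# precision while pushing small-scale variables to half-like precision.
large_scale = reduce_precision(np.pi, 23)   # ~single precision
small_scale = reduce_precision(np.pi, 10)   # ~half precision
```

In a Lorenz '96-style model, one would apply the aggressive rounding only to the fast, small-scale variables and reinvest the savings in resolution.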
Monthly Weather Review 145 (2017) 495-502
© 2017 American Meteorological Society. Earth's climate is a nonlinear dynamical system with scale-dependent Lyapunov exponents. As such, an important theoretical question for modeling weather and climate is how much real information is carried in a model's physical variables as a function of scale and variable type. Answering this question is of crucial practical importance given that the development of weather and climate models is strongly constrained by available supercomputer power. As a starting point for answering this question, the impact of limiting almost all real-number variables in the forecasting mode of the ECMWF Integrated Forecast System (IFS) from 64 to 32 bits is investigated. Results for annual integrations and medium-range ensemble forecasts indicate no noticeable reduction in accuracy, and an average gain in computational efficiency of approximately 40%. This study provides the motivation for more scale-selective reductions in numerical precision.
Climate Dynamics (2016) 1-13
© 2016 The Author(s). Predictability of Atlantic Ocean sea surface temperatures (SST) on seasonal and decadal timescales is investigated using a suite of statistical linear inverse models (LIMs). Observed monthly SST anomalies in the Atlantic sector (between 22°S and 66°N) are used to construct the LIMs for seasonal and decadal prediction. The forecast skills of the LIMs are then compared to those from two current operational forecast systems. Results indicate that the LIM has good forecast skill for time periods of 3–4 months on the seasonal timescale, with enhanced predictability in the spring season. On decadal timescales, the impact of inter-annual and intra-annual variability on predictability is also investigated. The results show that the suite of LIMs has forecast skill for about 3–4 years over most of the domain when only the decadal variability is used to construct the LIM. Including higher-frequency variability helps improve the forecast skill and maintains the correlation of LIM predictions with the observed SST anomalies for longer periods. These results indicate the importance of temporal scale interactions in improving predictability on decadal timescales. Hence, LIMs can not only be used as benchmarks for estimates of statistical skill but also to isolate contributions to the forecast skill from different timescales, spatial scales or even model components.
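The LIM methodology is compact enough to sketch. Below, a lag-tau propagator G = C(tau) C(0)^(-1) is estimated from synthetic multivariate data standing in for the SST anomalies; the data-generating system, dimensions and lag are invented for illustration and are not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "SST anomaly" data: a stable linear system plus noise,
# standing in for the observed anomaly fields used to train a LIM.
n_time, n_space = 2000, 5
A = -0.3 * np.eye(n_space) + 0.05 * rng.standard_normal((n_space, n_space))
x = np.zeros((n_time, n_space))
for t in range(n_time - 1):
    x[t + 1] = x[t] + A @ x[t] + 0.1 * rng.standard_normal(n_space)

# LIM: estimate the lag-tau propagator G = C(tau) @ inv(C(0))
tau = 2
x0, xt = x[:-tau], x[tau:]
C0 = x0.T @ x0 / len(x0)       # lag-0 covariance
Ctau = xt.T @ x0 / len(x0)     # lag-tau covariance
G = Ctau @ np.linalg.inv(C0)

forecast = x0 @ G.T            # predict tau steps ahead from each state
skill = np.corrcoef(forecast.ravel(), xt.ravel())[0, 1]
```

The same propagator can be applied repeatedly (or re-estimated at longer lags) to probe how skill decays with lead time, which is how benchmark comparisons against dynamical systems are made.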
Proceedings. Mathematical, physical, and engineering sciences 472 (2016) 20150772-
Given their increasing relevance for society, I suggest that the climate science community itself does not treat the development of error-free ab initio models of the climate system with sufficient urgency. With increasing levels of difficulty, I discuss a number of proposals for speeding up such development. Firstly, I believe that climate science should make better use of the pool of post-PhD talent in mathematics and physics, for developing next-generation climate models. Secondly, I believe there is more scope for the development of modelling systems which link weather and climate prediction more seamlessly. Finally, here in Europe, I call for a new European Programme on Extreme Computing and Climate to advance our ability to simulate climate extremes, and understand the drivers of such extremes. A key goal for such a programme is the development of a 1 km global climate system model to run on the first exascale supercomputers in the early 2020s.
Monthly Weather Review 144 (2016) 1867-1875
© 2016 American Meteorological Society. Stochastic parameterization provides a methodology for representing model uncertainty in ensemble forecasts. Here the impact of three existing stochastic parameterizations in the ocean component of a coupled model on forecast reliability over seasonal time scales is studied. The relative impacts of these schemes upon the ocean mean state and ensemble spread are analyzed. The oceanic variability induced by the atmospheric forcing of the coupled system is, in most regions, the major source of ensemble spread. The largest impact on spread and bias came from the stochastically perturbed parameterization tendency (SPPT) scheme, which has proven particularly effective in the atmosphere. The key regions affected are eddy-active regions, namely, the western boundary currents and the Southern Ocean, where ensemble spread is increased. However, unlike its impact in the atmosphere, SPPT in the ocean did not result in a significant decrease in forecast error on seasonal time scales. While there are good grounds for implementing stochastic schemes in ocean models, the results suggest that they will have to be more sophisticated. Some suggestions for next-generation stochastic schemes are made.
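As a rough illustration of what an SPPT-style perturbation does, the sketch below multiplies a parameterized tendency field by (1 + r), with r evolved as an AR(1) process for temporal correlation; the numbers (sigma, phi, the clipping bound, the field size) are illustrative placeholders, not the operational settings, and the operational scheme also correlates r in space:

```python
import numpy as np

def sppt_perturb(tendencies, r, clip=0.99):
    """SPPT-style perturbation: multiply the net parameterized
    tendency by (1 + r), clipping r so the multiplier stays positive."""
    r = np.clip(r, -clip, clip)
    return tendencies * (1.0 + r)

def ar1_update(prev, sigma=0.3, phi=0.95, rng=None):
    """AR(1) update giving the perturbation field temporal
    correlation while preserving its stationary variance sigma**2."""
    rng = rng or np.random.default_rng()
    return phi * prev + np.sqrt(1 - phi ** 2) * sigma * rng.standard_normal(prev.shape)

rng = np.random.default_rng(0)
r = np.zeros((4, 4))           # toy 4x4 "grid"
tend = np.ones((4, 4))         # placeholder tendency field
for _ in range(100):           # spin the AR(1) process up
    r = ar1_update(r, rng=rng)
perturbed = sppt_perturb(tend, r)
```

Because the multiplier is clipped to stay positive, the perturbed tendency never reverses sign, which is one reason the scheme is considered comparatively safe to apply to net physics tendencies.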
JOURNAL OF GEOPHYSICAL RESEARCH-ATMOSPHERES 121 (2016) 1698-1714
Assessing the role of insulin-like growth factors and binding proteins in prostate cancer using Mendelian randomization: Genetic variants as instruments for circulating levels.
International journal of cancer 139 (2016) 1520-1533
Circulating insulin-like growth factors (IGFs) and their binding proteins (IGFBPs) are associated with prostate cancer. Using genetic variants as instruments for IGF peptides, we investigated whether these associations are likely to be causal. We identified from the literature 56 single nucleotide polymorphisms (SNPs) in the IGF axis previously associated with biomarker levels (8 from a genome-wide association study [GWAS] and 48 in reported candidate genes). In ∼700 men without prostate cancer and two replication cohorts (N ∼ 900 and ∼9,000), we examined the properties of these SNPs as instrumental variables (IVs) for IGF-I, IGF-II, IGFBP-2 and IGFBP-3. Those confirmed as strong IVs were tested for association with prostate cancer risk, low (< 7) vs. high (≥ 7) Gleason grade, localised vs. advanced stage, and mortality, in 22,936 controls and 22,992 cases. IV analysis was used in an attempt to estimate the causal effect of circulating IGF peptides on prostate cancer. Published SNPs in the IGFBP1/IGFBP3 gene region, particularly rs11977526, were strong instruments for IGF-II and IGFBP-3, less so for IGF-I. Rs11977526 was associated with high (vs. low) Gleason grade (OR per IGF-II/IGFBP-3 level-raising allele 1.05; 95% CI: 1.00, 1.10). Using rs11977526 as an IV we estimated the causal effect of a one SD increase in IGF-II (∼265 ng/mL) on risk of high vs. low grade disease as 1.14 (95% CI: 1.00, 1.31). Because of the potential for pleiotropy of the genetic instruments, these findings can only causally implicate the IGF pathway in general, not any one specific biomarker.
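The instrumental-variable logic behind Mendelian randomization can be illustrated with a toy Wald-ratio calculation on synthetic data; the variable names, effect sizes and sample size below are invented for illustration and bear no relation to the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Synthetic example: genotype g (0/1/2 allele count) raises a
# biomarker level x, which raises the outcome y; u is an unmeasured
# confounder that the genetic instrument bypasses.
g = rng.integers(0, 3, n).astype(float)
u = rng.standard_normal(n)
x = 0.5 * g + u + rng.standard_normal(n)     # exposure
y = 0.3 * x + u + rng.standard_normal(n)     # outcome; true effect 0.3

def ols_slope(a, b):
    """Slope of b regressed on a (simple least squares)."""
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / (a @ a)

# Wald ratio: instrument-outcome effect / instrument-exposure effect
beta_gx = ols_slope(g, x)
beta_gy = ols_slope(g, y)
wald = beta_gy / beta_gx     # consistent estimate of the causal effect

naive = ols_slope(x, y)      # ordinary regression, biased upward by u
```

The contrast between `wald` and `naive` is the point of the method: the instrument recovers the causal slope that confounding hides from the direct regression. Pleiotropy, as the abstract notes, would break the assumption that g affects y only through x.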
Simulating weather regimes: impact of stochastic and perturbed parameter schemes in a simple atmospheric model
CLIMATE DYNAMICS 44 (2015) 2195-2214
Evaluation of ensemble forecast uncertainty using a new proper score: Application to medium-range and seasonal forecasts
QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY 141 (2015) 538-549
Invariant set theory: Violating measurement independence without fine tuning, conspiracy, constraints on free will or retrocausality
Electronic Proceedings in Theoretical Computer Science, EPTCS 195 (2015) 285-294
© 2015 T. N. Palmer. Invariant Set (IS) theory is a locally causal ontic theory of physics based on the Cosmological Invariant Set postulate: that the universe U can be considered a deterministic dynamical system evolving precisely on a (suitably constructed) fractal dynamically invariant set in U's state space. IS theory violates the Bell inequalities by violating Measurement Independence. Despite this, IS theory is not fine tuned, is not conspiratorial, does not constrain experimenter free will and does not invoke retrocausality. The reasons behind these claims are discussed in this paper. They arise from properties not found in conventional ontic models: the invariant set has zero measure in its Euclidean embedding space, has Cantor set structure homeomorphic to the p-adic integers (p ≫ 0) and is non-computable. In particular, it is shown that the p-adic metric encapsulates the physics of the Cosmological Invariant Set postulate, and provides the technical means to demonstrate no fine tuning or conspiracy. Quantum theory can be viewed as the singular limit of IS theory when p is set equal to infinity. Since it is based around a top-down constraint from cosmology, IS theory suggests that gravitational and quantum physics will be unified by a gravitational theory of the quantum, rather than a quantum theory of gravity. Some implications arising from such a perspective are discussed.
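For readers unfamiliar with the p-adic metric invoked above, it is easy to state concretely on the integers: d_p(a, b) = p^(-v), where v is the largest power of p dividing a - b, so numbers whose difference is divisible by a high power of p are "close". A small standalone sketch (the example numbers are mine, not the paper's):

```python
from fractions import Fraction

def p_adic_valuation(n, p):
    """Largest k such that p**k divides the nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def p_adic_distance(a, b, p):
    """p-adic distance d_p(a, b) = p**(-v), with v the p-adic
    valuation of a - b; an ultrametric, unlike the Euclidean metric."""
    if a == b:
        return Fraction(0)
    return Fraction(1, p ** p_adic_valuation(a - b, p))

d1 = p_adic_distance(2, 27, 5)   # 27 - 2 = 25 = 5**2, so distance 1/25
d2 = p_adic_distance(2, 3, 5)    # difference 1, so distance 1
```

The ultrametric ("strong triangle") inequality d(a, c) ≤ max(d(a, b), d(b, c)) holds for this distance, which is the structural property that distinguishes p-adic geometry from Euclidean geometry.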
Journal of Advances in Modeling Earth Systems 7 (2015) 1393-1408
Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating-point numbers to specific application needs. We analyze the performance-precision trade-off on FPGA hardware for the two-scale Lorenz '95 model. We scale the size of this toy model to that of a high-performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long-term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed-up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single-precision FPGA setup shows a speed-up of 2.8 times in comparison to our model implementation on two 6-core CPUs for large model setups.
Climate Dynamics 44 (2015) 2177-2193
© 2014, Springer-Verlag Berlin Heidelberg. The simulation of quasi-persistent regime structures in an atmospheric model with horizontal resolution typical of the Intergovernmental Panel on Climate Change fifth assessment report simulations is shown to be unrealistic. A higher resolution configuration of the same model, with horizontal resolution typical of that used in operational numerical weather prediction, is able to simulate these regime structures realistically. The spatial patterns of the simulated regimes are remarkably accurate at high resolution. A model configuration at intermediate resolution shows a marked improvement over the low-resolution configuration, particularly in terms of the temporal characteristics of the regimes, but does not produce a simulation as accurate as the very-high-resolution configuration. It is demonstrated that the simulation of regimes can be significantly improved, even at low resolution, by the introduction of a stochastic physics scheme. At low resolution the stochastic physics scheme drastically improves both the spatial and temporal aspects of the regime simulation. These results highlight the influence of small-scale processes on large-scale climate variability, and indicate that although simulating variability at small scales is a necessity, it may not be necessary to represent the small scales accurately, or even explicitly, in order to improve the simulation of large-scale climate. It is argued that these results could have important implications for improving both global climate simulations, and the ability of high-resolution limited-area models, forced by low-resolution global models, to reliably simulate regional climate change signals.
Does the ECMWF IFS Convection Parameterization with Stochastic Physics Correctly Reproduce Relationships between Convection and the Large-Scale State?
JOURNAL OF THE ATMOSPHERIC SCIENCES 72 (2015) 236-242
Proceedings - 2015 IEEE 23rd Annual International Symposium on Field-Programmable Custom Computing Machines, FCCM 2015 (2015) 171-178
© 2015 IEEE. The computationally intensive nature of atmospheric modelling is an ideal target for hardware acceleration. Performance of hardware designs can be improved through the use of reduced precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced precision optimisation for simulating chaotic systems, targeting atmospheric modelling in which even minor changes in arithmetic behaviour can have a significant impact on system behaviour. Hence, standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare statistical behaviour between reduced-precision CPU implementations to guide FPGA designs of a chaotic system, and analyse accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy, corresponding to less than 10% uncertainty in input parameters, a single Xilinx Virtex 6 SXT475 FPGA can be 13 times faster and 23 times more power efficient than a 6-core Intel Xeon X5650 processor.
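The Hellinger distance used above has a simple discrete form, H(p, q) = (1/√2)·||√p − √q||₂, bounded in [0, 1]. The sketch below applies it to histograms of two synthetic data sets standing in for reference and reduced-precision trajectory statistics; the distributions and binning are illustrative, not taken from the paper:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability
    distributions: 0 for identical distributions, 1 for disjoint."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Histogrammed "climates" of a reference run and a perturbed run
# (synthetic stand-ins for full- and reduced-precision trajectories).
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, 100000)
red = rng.normal(0.02, 1.01, 100000)   # slightly shifted statistics

bins = np.linspace(-5, 5, 51)
p_hist, _ = np.histogram(ref, bins)
q_hist, _ = np.histogram(red, bins)
d = hellinger(p_hist / p_hist.sum(), q_hist / q_hist.sum())
```

Because the comparison is between distributions rather than trajectories, it tolerates the trajectory divergence that chaos makes inevitable, which is why pointwise error metrics are inappropriate here.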
Geophysical research letters 42 (2015) 1554-1559
It has recently been argued that single-model seasonal forecast ensembles are overdispersive, implying that the real world is more predictable than indicated by estimates of so-called perfect model predictability, particularly over the North Atlantic. However, such estimates are based on relatively short forecast data sets comprising just 20 years of seasonal predictions. Here we study longer 40 year seasonal forecast data sets from multimodel seasonal forecast ensemble projects and show that sampling uncertainty due to the length of the hindcast periods is large. The skill of forecasting the North Atlantic Oscillation during winter varies within the 40 year data sets with high levels of skill found for some subperiods. It is demonstrated that while 20 year estimates of seasonal reliability can show evidence of overdispersive behavior, the 40 year estimates are more stable and show no evidence of overdispersion. Instead, the predominant feature on these longer time scales is underdispersion, particularly in the tropics. Key points: (1) predictions can appear overdispersive due to hindcast-length sampling error; (2) longer hindcasts are more robust and underdispersive, especially in the tropics; (3) twenty hindcasts are an inadequate sample size to assess seasonal forecast skill.
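Over- and underdispersion are commonly diagnosed with a spread-error ratio: a well-calibrated ensemble has spread close to the RMSE of its ensemble mean. A minimal synthetic illustration (ensemble size, hindcast length and noise level are invented, and the ensemble here is deliberately constructed to be well dispersed):

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_members = 40, 9

# Perfect-model construction: truth and each member are the shared
# predictable signal plus independent noise of the same amplitude.
signal = rng.standard_normal(n_years)
truth = signal + 0.8 * rng.standard_normal(n_years)
members = signal[:, None] + 0.8 * rng.standard_normal((n_years, n_members))

ens_mean = members.mean(axis=1)
rmse = np.sqrt(np.mean((ens_mean - truth) ** 2))
spread = np.sqrt(np.mean(members.var(axis=1, ddof=1)))
ratio = spread / rmse   # < 1 suggests underdispersion, > 1 overdispersion
```

With only 40 (let alone 20) verification years, `ratio` fluctuates appreciably from sample to sample even for this perfectly calibrated ensemble, which is the sampling-uncertainty point the paper makes.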
Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing.
Frontiers in computational neuroscience 9 (2015) 124-
How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.