Publications by Sam Hatfield


Accelerating high-resolution weather models with deep-learning hardware

PASC '19: Proceedings of the Platform for Advanced Scientific Computing Conference, Association for Computing Machinery (2019)

S Hatfield, M Chantry, P Düben, T Palmer

The next generation of weather and climate models will have an unprecedented level of resolution and model complexity, and running these models efficiently will require taking advantage of future supercomputers and heterogeneous hardware. In this paper, we investigate the use of mixed-precision hardware that supports floating-point operations at double, single and half precision. In particular, we examine the potential of the NVIDIA Tensor Core, a mixed-precision matrix-matrix multiplier developed mainly for deep learning, to accelerate the calculation of the Legendre transforms in the Integrated Forecasting System (IFS), one of the leading global weather forecast models. The Legendre transform is one of the most expensive components of the IFS and dominates the computational cost of simulations at very high resolution. We assess the impact of mixed-precision arithmetic in IFS simulations of operational complexity through software emulation. Through a targeted but minimal use of double-precision arithmetic, we are able to use either half-precision or mixed half/single-precision arithmetic for almost all of the calculations in the Legendre transform without affecting forecast skill.
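
The key arithmetic pattern here is a matrix product with half-precision inputs and single-precision accumulation, which is what a Tensor Core provides in hardware. Below is a minimal Python/NumPy sketch of how such a mixed-precision product can be emulated in software, in the spirit of the emulation used in the paper; the matrices are random stand-ins for the precomputed associated Legendre polynomial matrix and the spectral coefficients, not the IFS code.

import numpy as np

def tensor_core_matmul(a, b):
    # Emulate a Tensor-Core-style GEMM: inputs rounded to half
    # precision, products accumulated in single precision.
    a16 = a.astype(np.float16)  # rounding error enters here
    b16 = b.astype(np.float16)
    return a16.astype(np.float32) @ b16.astype(np.float32)

rng = np.random.default_rng(0)

# Random stand-ins: 'leg' plays the role of the associated Legendre
# polynomial matrix, 'spec' a block of spectral coefficients.
n = 256
leg = rng.standard_normal((n, n))
spec = rng.standard_normal((n, 64))

exact = leg @ spec                     # double-precision reference
mixed = tensor_core_matmul(leg, spec)  # emulated half/single product

rel_err = np.linalg.norm(exact - mixed) / np.linalg.norm(exact)
print(f"relative error of mixed-precision transform: {rel_err:.2e}")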


Choosing the optimal numerical precision for data assimilation in the presence of model error

Journal of Advances in Modeling Earth Systems, American Geophysical Union, 10 (2018) 2177-2191

S Hatfield, P Düben, M Chantry, K Kondo, T Miyoshi, T Palmer

The use of reduced numerical precision within an atmospheric data assimilation system is investigated. An atmospheric model with a spectral dynamical core is used to generate synthetic observations, which are then assimilated back into the same model using an ensemble Kalman filter. The effect on the analysis error of reducing precision from 64 bits to only 22 bits is measured and found to depend strongly on the degree of model uncertainty within the system. When the model used to generate the observations is identical to the model used to assimilate observations, the reduced-precision analysis is substantially degraded. However, when model error is introduced, by changing the diffusion scheme in the assimilation model or by using a higher-resolution model to generate observations, the difference in analysis quality between the two levels of precision is almost eliminated. Because reduced-precision arithmetic has a lower computational cost, lowering precision could free up computational resources in operational data assimilation and allow an increase in ensemble size or grid resolution.
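
The 22-bit experiments rely on software emulation of reduced precision. A minimal Python sketch of the usual trick, assuming the bit count refers to significand bits (in the spirit of reduced-precision emulators such as rpe, but not that library's actual API): round the significand of each double to the requested number of bits while keeping the float64 exponent range.

import numpy as np

def round_significand(x, bits):
    # Emulate storage with `bits` significand bits: split x into
    # mantissa and exponent, round the mantissa, and recombine.
    x = np.asarray(x, dtype=np.float64)
    m, e = np.frexp(x)                       # x = m * 2**e, 0.5 <= |m| < 1
    m = np.round(m * 2.0**bits) / 2.0**bits  # keep `bits` bits of m
    return np.ldexp(m, e)

x = np.full(3, np.pi)
for bits in (52, 22, 10):  # double, the paper's 22 bits, half-like
    y = round_significand(x, bits)
    print(f"{bits:2d} significand bits: {y[0]:.15f} "
          f"(abs error {abs(y[0] - np.pi):.2e})")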


Improving weather forecast skill through reduced precision data assimilation

Monthly Weather Review, American Meteorological Society, 146 (2017) 49-62

S Hatfield, AC Subramanian, TN Palmer, PD Düben

A new approach for improving the accuracy of data assimilation, by trading numerical precision for ensemble size, is introduced. Data assimilation is inherently uncertain, owing to the use of noisy observations and imperfect models. Thus, the larger rounding errors incurred by reducing precision may be within the tolerance of the system. Lower-precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, computational resources can be redistributed towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localization, lowering precision could actually permit an improvement in the accuracy of weather forecasts. Here, this idea is tested on an ensemble data assimilation system comprising the Lorenz ’96 toy atmospheric model and the ensemble square root filter. The system is run at double, single and half precision (the last using an emulation tool), and the performance of each precision is measured through mean error statistics and rank histograms. The sensitivity of these results to the observation error and the length of the observation window is also assessed. Then, by reinvesting the computational resources saved by reducing precision into a larger ensemble, assimilation error can be reduced at (hypothetically) no extra cost. This results in increased forecast skill relative to double-precision assimilation.
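
The Lorenz ’96 model is cheap enough that the precision experiment is easy to reproduce in outline. Below is a minimal Python/NumPy sketch of the forecast model run at double, single and half precision; the RK4 time step, forcing F = 8 and step count are illustrative choices rather than the paper's exact configuration, and native float16 stands in for the emulated half precision.

import numpy as np

def lorenz96_tendency(x, forcing):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, cyclic in i.
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt, forcing):
    # Classical fourth-order Runge-Kutta step.
    k1 = lorenz96_tendency(x, forcing)
    k2 = lorenz96_tendency(x + dt / 2 * k1, forcing)
    k3 = lorenz96_tendency(x + dt / 2 * k2, forcing)
    k4 = lorenz96_tendency(x + dt * k3, forcing)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

n, forcing, dt, steps = 40, 8.0, 0.05, 200
x0 = np.full(n, forcing)
x0[0] += 0.01  # small perturbation to trigger chaotic behaviour

# Run the same trajectory at three precisions; all arithmetic inside
# the loop inherits the chosen dtype.
for dtype in (np.float64, np.float32, np.float16):
    x = x0.astype(dtype)
    dt_t, f_t = dtype(dt), dtype(forcing)
    for _ in range(steps):
        x = rk4_step(x, dt_t, f_t)
    print(f"{np.dtype(dtype).name:8s} mean state after {steps} steps: "
          f"{float(x.mean()):+.6f}")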