Precision Machine Learning
Author(s)
Michaud, Eric J.; Liu, Ziming; Tegmark, Max
Download
entropy-25-00175-v3.pdf (3.377 MB)
Publisher with Creative Commons License
Terms of use
Creative Commons Attribution
Abstract
We explore unique considerations involved in fitting machine learning (ML) models to data with very high precision, as is often required for science applications. We empirically compare various function approximation methods and study how they scale with increasing parameters and data. We find that neural networks (NNs) can often outperform classical approximation methods on high-dimensional examples, by (we hypothesize) auto-discovering and exploiting modular structures therein. However, neural networks trained with common optimizers are less powerful for low-dimensional cases, which motivates us to study the unique properties of neural network loss landscapes and the corresponding optimization challenges that arise in the high precision regime. To address the optimization issue in low dimensions, we develop training tricks which enable us to train neural networks to extremely low loss, close to the limits allowed by numerical precision.
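To make the low-dimensional, high-precision regime described above concrete, the following is a minimal sketch, not the paper's actual code: the architecture, step counts, learning rate, and two-stage optimizer schedule are all illustrative assumptions. It fits a small float64 network to a 1D function, using an Adam warm start followed by full-batch L-BFGS to push the loss well below where first-order optimizers typically stall.

```python
# Minimal sketch: fitting a tiny MLP to a 1D target in float64.
# All hyperparameters here are illustrative assumptions, not the
# paper's actual training setup.
import torch

torch.manual_seed(0)
torch.set_default_dtype(torch.float64)  # float32 would floor the loss much earlier

# Target: a smooth 1D function sampled on a grid.
x = torch.linspace(-1.0, 1.0, 256).unsqueeze(1)
y = torch.sin(3.0 * x)

model = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

mse = lambda: torch.mean((model(x) - y) ** 2)

# Stage 1: Adam warm start.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = mse()
    loss.backward()
    opt.step()

# Stage 2: full-batch L-BFGS with a line search, run until the
# loss approaches the limits set by numerical precision.
lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=500,
                          tolerance_grad=0.0, tolerance_change=0.0,
                          line_search_fn="strong_wolfe")

def closure():
    lbfgs.zero_grad()
    loss = mse()
    loss.backward()
    return loss

lbfgs.step(closure)
print(f"final MSE: {mse().item():.3e}")
```

The two-stage schedule (a first-order warm start, then a second-order polish on the full batch) is one common way to reach very low loss in such low-dimensional fits; the specific training tricks developed in the paper may differ.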
Date issued
2023-01-15
Department
Massachusetts Institute of Technology. Department of Physics; Center for Brains, Minds, and Machines
Publisher
Multidisciplinary Digital Publishing Institute
Citation
Entropy 25 (1): 175 (2023)
Version: Final published version