The backbone method for ultra-high dimensional sparse machine learning
Author(s)
Bertsimas, Dimitris; Digalakis, Vassilis
Terms of use
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
We present the backbone method, a general framework that enables sparse and interpretable supervised machine learning methods to scale to ultra-high dimensional problems. We solve sparse regression problems with $$10^7$$ features in minutes and $$10^8$$ features in hours, as well as decision tree problems with $$10^5$$ features in minutes. The proposed method operates in two phases: we first determine the backbone set, consisting of potentially relevant features, by solving a number of tractable subproblems; then, we solve a reduced problem, considering only the backbone features. For the sparse regression problem, our theoretical analysis shows that, under certain assumptions and with high probability, the backbone set consists of the truly relevant features. Numerical experiments on both synthetic and real-world datasets demonstrate that our method outperforms or competes with state-of-the-art methods in ultra-high dimensional problems, and competes with optimal solutions in problems where exact methods scale, both in terms of recovering the truly relevant features and in its out-of-sample predictive performance.
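The two-phase structure described in the abstract can be illustrated with a minimal Python sketch for sparse regression. This is not the paper's implementation: the random feature subsets, the Lasso subproblem solver, and all parameter names below are illustrative stand-ins for the paper's actual subproblem construction and exact sparse-regression formulation.

import numpy as np
from sklearn.linear_model import Lasso

def backbone_sparse_regression(X, y, n_subproblems=20, subproblem_size=1000,
                               alpha=0.1, seed=0):
    """Illustrative two-phase backbone sketch: screen features via tractable
    subproblems, then refit a sparse model on the backbone set only."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    backbone = set()

    # Phase 1: solve tractable subproblems on random feature subsets and keep
    # every feature that receives a nonzero coefficient in some subproblem.
    for _ in range(n_subproblems):
        cols = rng.choice(n_features, size=min(subproblem_size, n_features),
                          replace=False)
        model = Lasso(alpha=alpha).fit(X[:, cols], y)
        backbone.update(cols[np.flatnonzero(model.coef_)].tolist())

    backbone = np.array(sorted(backbone), dtype=int)

    # Phase 2: solve the reduced problem restricted to the backbone features.
    final_model = Lasso(alpha=alpha).fit(X[:, backbone], y)
    return backbone, final_model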
Date issued
2022-01-22
Department
Massachusetts Institute of Technology. Operations Research Center; Sloan School of Management
Publisher
Springer US
Citation
Bertsimas, Dimitris and Digalakis, Vassilis. 2022. "The backbone method for ultra-high dimensional sparse machine learning."
Version: Author's final manuscript