Application of machine learning algorithms to the study of noise artifacts in gravitational-wave data
Author(s)
Biswas, Rahul; Blackburn, Lindy L.; Cao, Junwei; Hodge, K. A.; Katsavounidis, Erotokritos; Kim, Kyungmin; Kim, Young-Min; Le Bigot, Eric-Olivier; Lee, Chang-Hwan; Oh, John J.; Oh, Sang Hoon; Son, Edwin J.; Vaulin, Ruslan; Wang, Xiaoge; Essick, Reed Clasey; Tao, Ye; ...
Publisher Policy
This article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
The sensitivity of searches for astrophysical transients in data from the Laser Interferometer Gravitational-wave Observatory (LIGO) is generally limited by the presence of transient, non-Gaussian noise artifacts, which occur at a high enough rate such that accidental coincidence across multiple detectors is non-negligible. These “glitches” can easily be mistaken for transient gravitational-wave signals, and their robust identification and removal will help any search for astrophysical gravitational waves. We apply machine-learning algorithms (MLAs) to the problem, using data from auxiliary channels within the LIGO detectors that monitor degrees of freedom unaffected by astrophysical signals. Noise sources may produce artifacts in these auxiliary channels as well as the gravitational-wave channel. The number of auxiliary-channel parameters describing these disturbances may also be extremely large; high dimensionality is an area where MLAs are particularly well suited. We demonstrate the feasibility and applicability of three different MLAs: artificial neural networks, support vector machines, and random forests. These classifiers identify and remove a substantial fraction of the glitches present in two different data sets: four weeks of LIGO’s fourth science run and one week of LIGO’s sixth science run. We observe that all three algorithms agree on which events are glitches to within 10% for the sixth-science-run data, and support this by showing that the different optimization criteria used by each classifier generate the same decision surface, based on a likelihood-ratio statistic. Furthermore, we find that all classifiers obtain similar performance to the benchmark algorithm, the ordered veto list, which is optimized to detect pairwise correlations between transients in LIGO auxiliary channels and glitches in the gravitational-wave data. This suggests that most of the useful information currently extracted from the auxiliary channels is already described by this model. Future performance gains are thus likely to involve additional sources of information, rather than improvements in the classification algorithms themselves. We discuss several plausible sources of such new information as well as the ways of propagating it through the classifiers into gravitational-wave searches.
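To make the classification setup concrete, the following is a minimal sketch, not the authors' code, of the kind of pipeline the abstract describes: a random forest (one of the three MLAs studied) trained on auxiliary-channel feature vectors to separate glitch times from clean times, then ranked and evaluated with an ROC curve as in the paper's comparisons. The feature layout and the synthetic data below are illustrative assumptions only.

```python
# Sketch of glitch classification from auxiliary-channel features (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per candidate event in the
# gravitational-wave channel; columns are auxiliary-channel statistics
# (e.g. significance and time offset of the nearest auxiliary transient).
n_events, n_features = 5000, 50
X = rng.normal(size=(n_events, n_features))
y = rng.integers(0, 2, size=n_events)  # placeholder labels: 1 = glitch, 0 = clean

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train one of the classifiers named in the abstract (random forest).
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Rank events by the classifier's score; sweeping a threshold over this rank
# traces out the ROC curve (glitch-detection efficiency vs. false-alarm
# probability) used to compare classifiers against the ordered-veto-list benchmark.
scores = clf.predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, scores)
```

In the paper, the ranking statistic plays the role of an estimate of the likelihood ratio between the glitch and clean classes, which is why differently optimized classifiers can converge on the same decision surface.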
Date issued
2013-09
Department
Massachusetts Institute of Technology. Department of Physics; MIT Kavli Institute for Astrophysics and Space Research; LIGO (Observatory : Massachusetts Institute of Technology)
Journal
Physical Review D
Publisher
American Physical Society
Citation
Biswas, Rahul, Lindy Blackburn, Junwei Cao, Reed Essick, Kari Alison Hodge, Erotokritos Katsavounidis, Kyungmin Kim, et al. "Application of machine learning algorithms to the study of noise artifacts in gravitational-wave data." Physical Review D 88, no. 6 (September 2013). © 2013 American Physical Society.
Version: Final published version
ISSN
1550-7998
1550-2368