Greed Is Good If Randomized: New Inference for Dependency Parsing
Authors: Zhang, Yuan; Lei, Tao; Barzilay, Regina; Jaakkola, Tommi S.
Dependency parsing with high-order features results in a provably hard decoding problem. Much work has gone into developing powerful optimization methods for solving these combinatorial problems. In contrast, we explore, analyze, and demonstrate that a substantially simpler randomized greedy inference algorithm already suffices for near-optimal parsing: a) we analytically quantify the number of local optima that the greedy method has to overcome in the context of first-order parsing; b) we show that, as a decoding algorithm, the greedy method surpasses dual decomposition in second-order parsing; c) we empirically demonstrate that our approach with up to third-order and global features outperforms the state-of-the-art dual decomposition and MCMC sampling methods when evaluated on 14 languages of non-projective CoNLL datasets.
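To make the randomized greedy idea concrete, here is a minimal illustrative sketch, not the authors' implementation: a first-order (arc-factored) parser that restarts from random valid dependency trees and hill-climbs by reassigning one token's head at a time, accepting a change only if it improves the score and keeps the tree acyclic. The function names (`is_valid_tree`, `randomized_greedy`) and the plain list-of-lists score matrix are assumptions chosen for clarity.

```python
import random

def is_valid_tree(heads):
    """Check that heads encodes a valid (possibly non-projective) tree.
    heads[i] is the head of token i; index 0 is the artificial root."""
    n = len(heads) - 1
    for tok in range(1, n + 1):
        seen, cur = set(), tok
        while cur != 0:              # walk up to the root; a revisit is a cycle
            if cur in seen:
                return False
            seen.add(cur)
            cur = heads[cur]
    return True

def tree_score(heads, arc_score):
    """First-order model: the tree score decomposes over (head, modifier) arcs."""
    return sum(arc_score[heads[m]][m] for m in range(1, len(heads)))

def randomized_greedy(arc_score, n, restarts=10, seed=0):
    """Randomized greedy decoding: random restarts + local head-change moves."""
    rng = random.Random(seed)
    best_heads, best_score = None, float("-inf")
    for _ in range(restarts):
        # Random restart: rejection-sample a uniformly random valid start tree.
        heads = None
        while heads is None or not is_valid_tree(heads):
            heads = [0] + [rng.randrange(0, n + 1) for _ in range(n)]
        improved = True
        while improved:              # hill-climb until a local optimum
            improved = False
            for m in range(1, n + 1):
                old = heads[m]
                for h in range(n + 1):
                    if h == m or h == old:
                        continue
                    gain = arc_score[h][m] - arc_score[old][m]
                    heads[m] = h
                    if gain > 0 and is_valid_tree(heads):
                        old = h      # accept the improving, valid move
                        improved = True
                    else:
                        heads[m] = old
        score = tree_score(heads, arc_score)
        if score > best_score:
            best_heads, best_score = list(heads), score
    return best_heads, best_score
```

For example, with a 3-token sentence whose score matrix rewards the chain 0→1→2→3, the climber recovers that chain from any restart; the paper's contribution is showing that with enough restarts this simple scheme remains competitive even with high-order and global features, where exact decoding is intractable.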
Department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing
Zhang, Yuan, Tao Lei, Regina Barzilay, and Tommi Jaakkola. "Greed Is Good If Randomized: New Inference for Dependency Parsing." 2014 Conference on Empirical Methods in Natural Language Processing (October 2014).
Author's final manuscript