Sequence to better sequence: Continuous revision of combinatorial structures
Author(s)
Jaakkola, Tommi; Gifford, David; Mueller, Jonas
Download
Accepted version (398.7 KB)
Terms of use
Open Access Policy; Creative Commons Attribution-Noncommercial-Share Alike
Abstract
We present a model that, after learning from observations of (sequence, outcome) pairs, can be used to efficiently revise a new sequence in order to improve its associated outcome. Our framework requires neither example improvements nor additional evaluations of outcomes for proposed revisions. To avoid combinatorial search over sequence elements, we specify a generative model with continuous latent factors, which is learned via joint approximate inference using a recurrent variational autoencoder (VAE) and an outcome-predicting neural network module. Under this model, gradient methods can be used to efficiently optimize the continuous latent factors with respect to inferred outcomes. By appropriately constraining this optimization and using the VAE decoder to generate a revised sequence, we ensure the revision is fundamentally similar to the original sequence, is associated with better outcomes, and looks natural. These desiderata are proven to hold with high probability under our approach, which we demonstrate empirically by revising natural language sentences.
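The revision procedure the abstract outlines (encode, take constrained gradient steps on the latent code toward a higher predicted outcome, then decode) can be sketched in a few lines. The following is a minimal illustration, not the paper's exact algorithm: `encoder`, `decoder`, and `outcome_net` are stand-ins for the trained VAE and outcome-prediction modules, and the projection radius `max_shift` is a hypothetical stand-in for the paper's constraint on how far the latent code may move.

```python
# Hypothetical sketch of latent-space revision: gradient ascent on the
# predicted outcome, constrained to a ball around the original latent code.
import torch

def revise(seq, encoder, decoder, outcome_net,
           steps=100, lr=0.05, max_shift=1.0):
    z0 = encoder(seq).detach()           # latent code of the original sequence
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -outcome_net(z).sum()     # ascend the predicted outcome
        loss.backward()
        opt.step()
        with torch.no_grad():            # keep the revision close to z0
            shift = z - z0
            norm = shift.norm()
            if norm > max_shift:
                z.copy_(z0 + shift * (max_shift / norm))
    return decoder(z.detach())           # generate the revised sequence
```

Because the search happens in the continuous latent space rather than over discrete sequence elements, each step is a cheap gradient update, and the constraint on the latent shift is what keeps the decoded revision similar to the original.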
Date issued
2017
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Citation
Jaakkola, Tommi, Gifford, David, and Mueller, Jonas. 2017. "Sequence to better sequence: Continuous revision of combinatorial structures."
Version: Author's final manuscript