Controlling Neural Language Generation
Author(s)
Shen, Tianxiao
Advisor
Jaakkola, Tommi
Barzilay, Regina
Abstract
Large-scale neural language models have made impressive strides in natural language generation. However, typical models operate in a left-to-right, unconstrained fashion with limited control over what is generated. This thesis explores flexible sequence models and weakly supervised methods to perform various controlled generation tasks. We anticipate that these techniques will be broadly applicable to other domains, such as the generation of images, molecules, and biological sequences.
We begin by presenting a class of sequence models called blank language models (BLMs), which generate sequences by dynamically creating and filling in blanks. Given partially specified text with one or more blanks, a BLM fills in each blank with a variable number of tokens consistent with the surrounding context. Our model is well suited to a variety of text editing and rewriting tasks, and we demonstrate its effectiveness on text infilling, ancient text restoration, and sentiment transfer.
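As an illustration, the sketch below captures the generation loop such a model performs: repeatedly pick a blank, fill in a word, and decide whether to leave new blanks to its left and/or right. The `model` interface, the `BLANK` token, and the toy completion policy are illustrative assumptions, not the thesis's actual API.

```python
# Minimal sketch of blank-filling generation. A hypothetical `model`
# maps the current token sequence to an action: which blank to fill,
# the word to insert, and whether to keep a blank on each side.
from typing import Callable, List, Tuple

BLANK = "__"
Action = Tuple[int, str, bool, bool]  # (blank index, word, left blank?, right blank?)

def fill_blanks(tokens: List[str], model: Callable[[List[str]], Action]) -> List[str]:
    """Repeatedly expand blanks until none remain."""
    while BLANK in tokens:
        idx, word, left, right = model(tokens)
        pos = [i for i, t in enumerate(tokens) if t == BLANK][idx]
        # Replace the chosen blank with (optional blank) word (optional blank).
        replacement = ([BLANK] if left else []) + [word] + ([BLANK] if right else [])
        tokens = tokens[:pos] + replacement + tokens[pos + 1:]
    return tokens

# Toy deterministic policy completing "they also have __ which __ ."
script = iter([(0, "ice", False, True), (0, "cream", False, False),
               (0, "is", False, True), (0, "really", False, True),
               (0, "good", False, False)])
print(" ".join(fill_blanks("they also have __ which __ .".split(),
                           lambda _: next(script))))
# -> they also have ice cream which is really good .
```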
Next, we investigate text autoencoders and their use in controlling generation through latent space operations. We establish a theory of how to mold a meaningful latent space geometry for discrete text data. Building on this, we develop a family of denoising text autoencoders that demonstrate the potential of attribute modification (e.g., tense or sentiment) through simple vector arithmetic.
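The following is a minimal sketch of such vector arithmetic: estimate an attribute direction as the difference of mean latent codes between two small sets of sentences exhibiting the two attribute values, then shift an input's code along that direction and decode. The `encode` and `decode` functions stand in for a trained autoencoder and are assumptions of this sketch.

```python
# Attribute modification by latent vector arithmetic (illustrative only).
import numpy as np

def attribute_vector(encode, positive, negative):
    """Difference of mean latent codes between two attribute groups."""
    z_pos = np.mean([encode(s) for s in positive], axis=0)
    z_neg = np.mean([encode(s) for s in negative], axis=0)
    return z_pos - z_neg

def transfer(encode, decode, sentence, v, scale=1.0):
    """Shift a sentence's code along the attribute direction and decode."""
    return decode(encode(sentence) + scale * v)
```

For instance, adding a "past tense" direction to the code of a present-tense sentence and decoding would, ideally, produce its past-tense counterpart.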
The final two chapters address language style transfer in the absence of supervised data. We first formalize the task of non-parallel style transfer and discuss the feasibility of the learning problem. We propose a method that performs style transfer by aligning the distributions of latent representations across styles. We then study confounding factors and show that by dividing the data into two groups of different styles, where the datasets within each group exhibit variation we do not wish to alter, we can exploit this invariance to isolate confounders and transfer text in the desired direction.
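One common way to realize such distributional alignment, sketched below under assumptions that may differ from the thesis's exact formulation, is an adversarial objective: a discriminator tries to tell which style a latent code came from, while the encoder-decoder reconstructs sentences and is rewarded for fooling it. `E`, `G`, `D`, and the batches `x0`, `x1` are hypothetical stand-ins.

```python
# Sketch of an adversarial alignment objective for non-parallel style
# transfer: encoder E, style-conditioned decoder G, discriminator D over
# latent codes; x0, x1 are token-index batches from styles 0 and 1.
import torch
import torch.nn.functional as F

def generator_loss(E, G, D, x0, x1, lam=1.0):
    """Reconstruct each sentence in its own style while making the
    latent codes of the two styles indistinguishable."""
    z0, z1 = E(x0), E(x1)
    rec = F.cross_entropy(G(z0, style=0), x0) + F.cross_entropy(G(z1, style=1), x1)
    # D is trained separately to classify which style a code came from;
    # subtracting its loss rewards the encoder for aligning the two
    # latent distributions (the usual minimax game).
    d0, d1 = D(z0), D(z1)
    adv = F.binary_cross_entropy_with_logits(d0, torch.ones_like(d0)) + \
          F.binary_cross_entropy_with_logits(d1, torch.zeros_like(d1))
    return rec - lam * adv
```

At transfer time, a sentence from one style would simply be decoded with the other style's label.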
Date issued
2022-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology