Show simple item record

dc.contributor.author: Ma, Fangchang
dc.contributor.author: Ayaz, Ulas
dc.contributor.author: Karaman, Sertac
dc.date.accessioned: 2020-05-14T18:04:23Z
dc.date.available: 2020-05-14T18:04:23Z
dc.date.issued: 2019-07
dc.date.submitted: 2018-12
dc.identifier.issn: 9781510884472
dc.identifier.uri: https://hdl.handle.net/1721.1/125238
dc.description.abstract: The problem of inverting generative neural networks (i.e., to recover the input latent code given partial network output), motivated by image inpainting, has recently been studied in prior work that focused on fully-connected networks. In this work, we present new theoretical results on convolutional networks, which are more widely used in practice. The network inversion problem is highly non-convex, and hence is typically computationally intractable and without optimality guarantees. However, we rigorously prove that, for a 2-layer convolutional generative network with ReLU and Gaussian-distributed random weights, the input latent code can be deduced from the network output efficiently using simple gradient descent. This new theoretical finding implies that the mapping from the low-dimensional latent space to the high-dimensional image space is one-to-one, under our assumptions. In addition, the same conclusion holds even when the network output is only partially observed (i.e., with missing pixels). We further demonstrate, empirically, that the same conclusion extends to networks with multiple layers, other activation functions (leaky ReLU, sigmoid and tanh), and weights trained on real datasets. [en_US]
dc.language.iso: en
dc.publisher: Neural Information Processing Systems Foundation, Inc. [en_US]
dc.relation.isversionof: https://papers.nips.cc/paper/8171-invertibility-of-convolutional-generative-networks-from-partial-measurements [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: Neural Information Processing Systems (NIPS) [en_US]
dc.title: Invertibility of convolutional generative networks from partial measurements [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Ma, Fangchang et al. "Invertibility of Convolutional Generative Networks from Partial Measurements." Advances in Neural Information Processing Systems 31 (NIPS 2018), 3-8 December, 2018, Montreal, Canada, NIPS, 2018. © 2018 Curran Associates Inc. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics [en_US]
dc.relation.journal: Advances in Neural Information Processing Systems 31 (NIPS 2018) [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2019-10-29T13:15:43Z
dspace.date.submission: 2019-10-29T13:15:47Z
mit.journal.volume: 31 [en_US]
mit.metadata.status: Complete
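The abstract above describes recovering a latent code from the partially observed output of a random-weight ReLU generative network using simple gradient descent. A minimal NumPy sketch of that idea follows; it is a hypothetical illustration, not the authors' code: the dimensions, step size, observation mask, and the dense (fully-connected-style) two-layer generator are assumptions for brevity, whereas the paper's theory concerns convolutional generators.

```python
import numpy as np

# Hypothetical sketch: invert a small 2-layer ReLU generator G(z) = W2 @ relu(W1 @ z)
# from partially observed output by gradient descent on the latent code z.
rng = np.random.default_rng(0)
k, h, n = 8, 64, 256                       # latent, hidden, output dimensions (illustrative)
W1 = rng.normal(0.0, 1.0 / np.sqrt(h), size=(h, k))   # Gaussian random weights
W2 = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, h))

def relu(x):
    return np.maximum(x, 0.0)

def G(z):
    return W2 @ relu(W1 @ z)

z_true = rng.normal(size=k)
mask = rng.random(n) < 0.5                 # observe roughly half the output ("missing pixels")
y = G(z_true)

def loss(z):
    # Squared error on the observed entries only
    return 0.5 * np.sum((mask * (G(z) - y)) ** 2)

z0 = rng.normal(size=k)                    # random initialization
z = z0.copy()
for _ in range(2000):
    h1 = W1 @ z
    r = mask * (W2 @ relu(h1) - y)         # residual on observed entries
    grad = W1.T @ ((h1 > 0) * (W2.T @ r))  # chain rule through the ReLU
    z = z - 0.2 * grad                     # plain gradient descent step

print(loss(z), np.linalg.norm(z - z_true))  # loss typically shrinks toward zero
```

Under the paper's assumptions (expansive layers, Gaussian weights), this non-convex descent typically drives the latent estimate toward the true code even though only a subset of output entries is observed.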

