| dc.contributor.author | Ma, Fangchang | |
| dc.contributor.author | Ayaz, Ulas | |
| dc.contributor.author | Karaman, Sertac | |
| dc.date.accessioned | 2020-05-14T18:04:23Z | |
| dc.date.available | 2020-05-14T18:04:23Z | |
| dc.date.issued | 2019-07 | |
| dc.date.submitted | 2018-12 | |
| dc.identifier.isbn | 9781510884472 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/125238 | |
| dc.description.abstract | The problem of inverting generative neural networks (i.e., recovering the input latent code given partial network output), motivated by image inpainting, has recently been studied in prior work focused on fully-connected networks. In this work, we present new theoretical results for convolutional networks, which are more widely used in practice. The network inversion problem is highly non-convex, and hence is typically computationally intractable and without optimality guarantees. However, we rigorously prove that, for a 2-layer convolutional generative network with ReLU activations and Gaussian-distributed random weights, the input latent code can be recovered from the network output efficiently using simple gradient descent. This new theoretical finding implies that, under our assumptions, the mapping from the low-dimensional latent space to the high-dimensional image space is one-to-one. In addition, the same conclusion holds even when the network output is only partially observed (i.e., with missing pixels). We further demonstrate empirically that the same conclusion extends to networks with more layers, other activation functions (leaky ReLU, sigmoid, and tanh), and weights trained on real datasets. | en_US |
| dc.language.iso | en | |
| dc.publisher | Neural Information Processing Systems Foundation, Inc. | en_US |
| dc.relation.isversionof | https://papers.nips.cc/paper/8171-invertibility-of-convolutional-generative-networks-from-partial-measurements | en_US |
| dc.rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. | en_US |
| dc.source | Neural Information Processing Systems (NIPS) | en_US |
| dc.title | Invertibility of convolutional generative networks from partial measurements | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Ma, Fangchang et al. "Invertibility of Convolutional Generative Networks from Partial Measurements." Advances in Neural Information Processing Systems 31 (NIPS 2018), 3-8 December, 2018, Montreal, Canada, NIPS, 2018. © 2018 Curran Associates Inc. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics | en_US |
| dc.relation.journal | Advances in Neural Information Processing Systems 31 (NIPS 2018) | en_US |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dc.date.updated | 2019-10-29T13:15:43Z | |
| dspace.date.submission | 2019-10-29T13:15:47Z | |
| mit.journal.volume | 31 | en_US |
| mit.metadata.status | Complete | |