Show simple item record

dc.contributor.author: Deng, Mo
dc.contributor.author: Goy, Alexandre Sydney Robert
dc.contributor.author: Li, Shuai
dc.contributor.author: Arthur, Kwabena K.
dc.contributor.author: Barbastathis, George
dc.date.accessioned: 2020-07-22T20:34:18Z
dc.date.available: 2020-07-22T20:34:18Z
dc.date.issued: 2020-01
dc.date.submitted: 2019-12
dc.identifier.issn: 1094-4087
dc.identifier.uri: https://hdl.handle.net/1721.1/126326
dc.description.abstract: Deep neural networks (DNNs) are efficient solvers for ill-posed problems and have been shown to outperform classical optimization techniques in several computational imaging problems. In supervised mode, DNNs are trained by minimizing a measure of the difference between their actual output and their desired output; the choice of measure, referred to as the “loss function,” strongly affects performance and generalization ability. In a recent paper [A. Goy et al., Phys. Rev. Lett. 121(24), 243902 (2018)], we showed that DNNs trained with the negative Pearson correlation coefficient (NPCC) as the loss function are particularly well suited to photon-starved phase-retrieval problems, though the reconstructions are manifestly deficient at high spatial frequencies. In this paper, we show that reconstructions by DNNs trained with the default feature (perceptual) loss, defined at VGG layer ReLU-22, contain more fine details; however, grid-like artifacts appear and are enhanced as photon counts become very low. Two additional key findings related to these artifacts are presented here. First, the frequency signature of the artifacts depends on the VGG inner layer that the perceptual loss is defined upon, halving with each MaxPooling2D layer deeper into the VGG. Second, VGG ReLU-12 outperforms all other layers as the defining layer for the perceptual loss. [en_US]
dc.description.sponsorship: Intelligence Advanced Research (Award FA8650-17-C-9113) [en_US]
dc.language.iso: en
dc.publisher: The Optical Society [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1364/oe.381301 [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: OSA Publishing [en_US]
dc.title: Probing shallower: perceptual loss trained Phase Extraction Neural Network (PLT-PhENN) for artifact-free reconstruction at low photon budget [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Deng, Mo et al. "Probing shallower: perceptual loss trained Phase Extraction Neural Network (PLT-PhENN) for artifact-free reconstruction at low photon budget." Optics Express 28, 2 (January 2020): 2511-2535 © 2020 Optical Society of America [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Mechanical Engineering [en_US]
dc.relation.journal: Optics Express [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dc.date.updated: 2020-06-22T19:04:24Z
dspace.date.submission: 2020-06-22T19:04:28Z
mit.journal.volume: 28 [en_US]
mit.journal.issue: 2 [en_US]
mit.license: PUBLISHER_POLICY
mit.metadata.status: Complete
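The abstract above uses the negative Pearson correlation coefficient (NPCC) as the training loss. As a reference for the definition only — this is a minimal NumPy sketch, not the authors' implementation — the NPCC between a reconstruction and its ground truth can be computed as:

```python
import numpy as np

def npcc_loss(pred, target):
    """Negative Pearson correlation coefficient between two arrays.

    Returns a value in [-1, 1]; minimizing it drives the prediction
    toward perfect positive linear correlation with the target.
    """
    p = pred - pred.mean()
    t = target - target.mean()
    # Small epsilon guards against division by zero for constant inputs.
    return -(p * t).sum() / (np.sqrt((p ** 2).sum() * (t ** 2).sum()) + 1e-12)
```

Note that the loss is invariant to affine rescaling of the prediction (e.g., `npcc_loss(x, 2 * x + 3)` is also -1), which is why NPCC-trained reconstructions are typically renormalized before quantitative comparison.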

