Show simple item record

dc.contributor.author       Husvogt, Lennart
dc.contributor.author       Fujimoto, James G
dc.date.accessioned         2021-01-25T15:53:28Z
dc.date.available           2021-01-25T15:53:28Z
dc.date.issued              2020-02
dc.date.submitted           2019-11
dc.identifier.isbn          9783658292676
dc.identifier.issn          2628-8958
dc.identifier.uri           https://hdl.handle.net/1721.1/129537
dc.description.abstract     Fundus photography and Optical Coherence Tomography Angiography (OCT-A) are two commonly used modalities in ophthalmic imaging. With the development of deep learning algorithms, fundus image processing, especially retinal vessel segmentation, has been extensively studied. Built upon the known operator theory, interpretable deep network pipelines with well-defined modules have been constructed on fundus images. In this work, we firstly train a modularized network pipeline for the task of retinal vessel segmentation on the fundus database DRIVE. The pretrained preprocessing module from the pipeline is then directly transferred onto OCT-A data for image quality enhancement without further fine-tuning. Output images show that the preprocessing net can balance the contrast, suppress noise and thereby produce vessel trees with improved connectivity in both image modalities. The visual impression is confirmed by an observer study with five OCT-A experts. Statistics of the grades by the experts indicate that the transferred module improves both the image quality and the diagnostic quality. Our work provides an example that modules within network pipelines that are built upon the known operator theory facilitate cross-modality reuse without additional training or transfer learning. [en_US]
dc.description.sponsorship  European Union. Horizon 2020 Research and Innovation Programme (Grant 810316) [en_US]
dc.language.iso             en
dc.publisher                Springer Fachmedien Wiesbaden [en_US]
dc.relation.isversionof     10.1007/978-3-658-29267-6_61 [en_US]
dc.rights                   Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri               http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source                   arXiv [en_US]
dc.title                    Modularization of deep networks allows cross-modality reuse: lesson learnt [en_US]
dc.type                     Article [en_US]
dc.identifier.citation      Fu, Weilin et al. “Modularization of deep networks allows cross-modality reuse: lesson learnt.” Informatik aktuell (February 2020): 274-279. © 2020 The Author(s) [en_US]
dc.contributor.department   Massachusetts Institute of Technology. Research Laboratory of Electronics [en_US]
dc.contributor.department   Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.relation.journal         Informatik aktuell [en_US]
dc.eprint.version           Original manuscript [en_US]
dc.type.uri                 http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status               http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated             2020-12-15T13:49:15Z
dspace.orderedauthors       Fu, W; Husvogt, L; Ploner, S; Fujimoto, JG; Maier, A [en_US]
dspace.date.submission      2020-12-15T13:49:21Z
mit.license                 OPEN_ACCESS_POLICY
mit.metadata.status         Complete
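
The abstract above describes training a modularized pipeline (a learnable preprocessing module followed by a vessel segmentation module) on fundus images, then reusing the frozen preprocessing module on OCT-A data without fine-tuning. The sketch below illustrates only that reuse pattern; the module names, layer choices, and tensor shapes are hypothetical placeholders, not the authors' architecture or code.

```python
# Minimal PyTorch sketch of cross-modality reuse of a pretrained preprocessing
# module. All architectures, shapes, and data here are placeholders.
import torch
import torch.nn as nn

class PreprocessingNet(nn.Module):
    """Stand-in preprocessing module (contrast balancing / denoising)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class SegmentationNet(nn.Module):
    """Stand-in vessel segmentation module (the paper uses a more elaborate net)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
    def forward(self, x):
        return self.net(x)

preprocess = PreprocessingNet()
segment = SegmentationNet()
pipeline = nn.Sequential(preprocess, segment)

# 1) Train the modularized pipeline end-to-end on fundus images with vessel labels
#    (placeholder random tensors stand in for a DRIVE-style dataset).
optimizer = torch.optim.Adam(pipeline.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
fundus = torch.rand(4, 1, 256, 256)
labels = torch.randint(0, 2, (4, 1, 256, 256)).float()
for _ in range(5):  # placeholder training loop
    optimizer.zero_grad()
    loss = loss_fn(pipeline(fundus), labels)
    loss.backward()
    optimizer.step()

# 2) Reuse the pretrained preprocessing module directly on OCT-A data,
#    frozen, with no further fine-tuning or transfer learning.
preprocess.eval()
octa = torch.rand(4, 1, 256, 256)  # placeholder OCT-A en-face batch
with torch.no_grad():
    enhanced_octa = preprocess(octa)  # quality-enhanced OCT-A image
```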

