| dc.contributor.author | Husvogt, Lennart |  | 
| dc.contributor.author | Fujimoto, James G |  | 
| dc.date.accessioned | 2021-01-25T15:53:28Z |  | 
| dc.date.available | 2021-01-25T15:53:28Z |  | 
| dc.date.issued | 2020-02 |  | 
| dc.date.submitted | 2019-11 |  | 
| dc.identifier.isbn | 9783658292676 |  | 
| dc.identifier.issn | 2628-8958 |  | 
| dc.identifier.uri | https://hdl.handle.net/1721.1/129537 |  | 
| dc.description.abstract | Fundus photography and Optical Coherence Tomography Angiography (OCT-A) are two commonly used modalities in ophthalmic imaging. With the development of deep learning algorithms, fundus image processing, especially retinal vessel segmentation, has been extensively studied. Built upon known operator theory, interpretable deep network pipelines with well-defined modules have been constructed for fundus images. In this work, we first train a modularized network pipeline for the task of retinal vessel segmentation on the fundus database DRIVE. The pretrained preprocessing module from the pipeline is then transferred directly onto OCT-A data for image quality enhancement without further fine-tuning. Output images show that the preprocessing net can balance the contrast, suppress noise, and thereby produce vessel trees with improved connectivity in both image modalities. The visual impression is confirmed by an observer study with five OCT-A experts. Statistics of the experts' grades indicate that the transferred module improves both the image quality and the diagnostic quality. Our work provides an example that modules within network pipelines built upon known operator theory facilitate cross-modality reuse without additional training or transfer learning. | en_US | 
| dc.description.sponsorship | European Union. Horizon 2020 Research and Innovation Programme (Grant 810316) | en_US | 
| dc.language.iso | en |  | 
| dc.publisher | Springer Fachmedien Wiesbaden | en_US | 
| dc.relation.isversionof | 10.1007/978-3-658-29267-6_61 | en_US | 
| dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US | 
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US | 
| dc.source | arXiv | en_US | 
| dc.title | Modularization of deep networks allows cross-modality reuse: lesson learnt | en_US | 
| dc.type | Article | en_US | 
| dc.identifier.citation | Fu, Weilin et al. “Modularization of deep networks allows cross-modality reuse: lesson learnt.” Informatik aktuell (February 2020): 274-279 © 2020 The Author(s) | en_US | 
| dc.contributor.department | Massachusetts Institute of Technology. Research Laboratory of Electronics | en_US | 
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US | 
| dc.relation.journal | Informatik aktuell | en_US | 
| dc.eprint.version | Original manuscript | en_US | 
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US | 
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US | 
| dc.date.updated | 2020-12-15T13:49:15Z |  | 
| dspace.orderedauthors | Fu, W; Husvogt, L; Ploner, S; Fujimoto, JG; Maier, A | en_US | 
| dspace.date.submission | 2020-12-15T13:49:21Z |  | 
| mit.license | OPEN_ACCESS_POLICY |  | 
| mit.metadata.status | Complete |  |