dc.contributor.author | Zhao, Amy (Xiaoyu Amy) | |
dc.contributor.author | Balakrishnan, Guha | |
dc.contributor.author | Durand, Fredo | |
dc.contributor.author | Guttag, John V | |
dc.contributor.author | Dalca, Adrian Vasile | |
dc.date.accessioned | 2021-02-23T20:37:04Z | |
dc.date.available | 2021-02-23T20:37:04Z | |
dc.date.issued | 2019-06 | |
dc.date.submitted | 2019-04 | |
dc.identifier.isbn | 9781728132938 | |
dc.identifier.issn | 1063-6919 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/129978 | |
dc.description.abstract | Image segmentation is an important task in many medical applications. Methods based on convolutional neural networks attain state-of-the-art accuracy; however, they typically rely on supervised training with large labeled datasets. Labeling medical images requires significant expertise and time, and typical hand-tuned approaches for data augmentation fail to capture the complex variations in such images. We present an automated data augmentation method for synthesizing labeled medical images. We demonstrate our method on the task of segmenting magnetic resonance imaging (MRI) brain scans. Our method requires only a single segmented scan, and leverages other unlabeled scans in a semi-supervised approach. We learn a model of transformations from the images, and use the model along with the labeled example to synthesize additional labeled examples. Each transformation comprises a spatial deformation field and an intensity change, enabling the synthesis of complex effects such as variations in anatomy and image acquisition procedures. We show that training a supervised segmenter with these new examples provides significant improvements over state-of-the-art methods for one-shot biomedical image segmentation. | en_US |
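A minimal sketch of the synthesis step the abstract describes: a learned transformation, made up of a spatial deformation field and an intensity change, is applied to the single labeled atlas to produce a new labeled example. The function and variable names below are illustrative assumptions, and the random placeholder transforms stand in for the transformation models the paper learns from the unlabeled scans; this is not the authors' implementation.

```python
# Hypothetical sketch: synthesize one labeled example from the labeled atlas
# using a spatial deformation field plus an intensity change. The transforms
# here are random placeholders, not the learned models from the paper.
import numpy as np
from scipy.ndimage import map_coordinates

def synthesize_example(atlas_img, atlas_seg, flow, intensity_delta):
    """Warp the atlas image and its label map with `flow` (shape (2, H, W),
    displacements in voxels), then add an intensity change to the image only,
    so the synthesized image stays aligned with its synthesized labels."""
    h, w = atlas_img.shape
    grid = np.mgrid[0:h, 0:w].astype(np.float32)
    coords = grid + flow  # sampling locations after the spatial deformation

    warped_img = map_coordinates(atlas_img, coords, order=1)  # linear interp for intensities
    warped_seg = map_coordinates(atlas_seg, coords, order=0)  # nearest neighbor for labels
    return warped_img + intensity_delta, warped_seg

# Toy usage with placeholder transforms standing in for learned ones.
atlas_img = np.random.rand(64, 64).astype(np.float32)
atlas_seg = (atlas_img > 0.5).astype(np.int32)
flow = np.random.randn(2, 64, 64).astype(np.float32) * 1.5
intensity_delta = np.random.randn(64, 64).astype(np.float32) * 0.05
new_img, new_seg = synthesize_example(atlas_img, atlas_seg, flow, intensity_delta)
```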
dc.language.iso | en | |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US |
dc.relation.isversionof | 10.1109/CVPR.2019.00874 | en_US |
dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
dc.source | arXiv | en_US |
dc.title | Data Augmentation Using Learned Transformations for One-Shot Medical Image Segmentation | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Zhao, Amy et al. “Data Augmentation Using Learned Transformations for One-Shot Medical Image Segmentation.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, California, June 2019. Institute of Electrical and Electronics Engineers (IEEE). © 2019 The Author(s) | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.relation.journal | 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | en_US |
dc.eprint.version | Original manuscript | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
dc.date.updated | 2020-12-11T17:15:44Z | |
dspace.orderedauthors | Zhao, A; Balakrishnan, G; Durand, F; Guttag, JV; Dalca, AV | en_US |
dspace.date.submission | 2020-12-11T17:15:48Z | |
mit.journal.volume | 2019-June | en_US |
mit.license | OPEN_ACCESS_POLICY | |
mit.metadata.status | Complete | |