
dc.contributor.author: Shenoy, Jayanth
dc.contributor.author: Zhang, Xingjian Davis
dc.contributor.author: Tao, Bill
dc.contributor.author: Mehrotra, Shlok
dc.contributor.author: Yang, Rem
dc.contributor.author: Zhao, Han
dc.contributor.author: Vasisht, Deepak
dc.date.accessioned: 2024-10-15T16:34:38Z
dc.date.available: 2024-10-15T16:34:38Z
dc.date.issued: 2024-09-19
dc.identifier.uri: https://hdl.handle.net/1721.1/157312
dc.description.abstract: Satellite image time series (SITS) segmentation is crucial for many applications, like environmental monitoring, land cover mapping, and agricultural crop type classification. However, training models for SITS segmentation remains a challenging task due to the lack of abundant training data, which requires fine-grained annotation. We propose S4, a new self-supervised pretraining approach that significantly reduces the requirement for labeled training data by utilizing two key insights of satellite imagery: (a) Satellites capture images in different parts of the spectrum, such as radio frequencies and visible frequencies. (b) Satellite imagery is geo-registered, allowing for fine-grained spatial alignment. We use these insights to formulate pretraining tasks in S4. To the best of our knowledge, S4 is the first multimodal and temporal approach for SITS segmentation. S4's novelty stems from leveraging multiple properties required for SITS self-supervision: (1) multiple modalities, (2) temporal information, and (3) pixel-level feature extraction. We also curate m2s2-SITS, a large-scale dataset of unlabeled, spatially aligned, multimodal, and geographic-specific SITS that serves as representative pretraining data for S4. Finally, we evaluate S4 on multiple SITS segmentation datasets and demonstrate its efficacy against competing baselines while using limited labeled data. Through a series of extensive comparisons and ablation studies, we demonstrate S4's ability as an effective feature extractor for downstream semantic segmentation. (en_US)
dc.publisher: Multidisciplinary Digital Publishing Institute (en_US)
dc.relation.isversionof: 10.3390/rs16183470 (en_US)
dc.rights: Creative Commons Attribution (en_US)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ (en_US)
dc.source: Multidisciplinary Digital Publishing Institute (en_US)
dc.title: Self-Supervised Learning across the Spectrum (en_US)
dc.type: Article (en_US)
dc.identifier.citation: Shenoy, J.; Zhang, X.D.; Tao, B.; Mehrotra, S.; Yang, R.; Zhao, H.; Vasisht, D. Self-Supervised Learning across the Spectrum. Remote Sens. 2024, 16, 3470. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.relation.journal: Remote Sensing (en_US)
dc.identifier.mitlicense: PUBLISHER_CC
dc.eprint.version: Final published version (en_US)
dc.type.uri: http://purl.org/eprint/type/JournalArticle (en_US)
eprint.status: http://purl.org/eprint/status/PeerReviewed (en_US)
dc.date.updated: 2024-09-27T13:18:26Z
dspace.date.submission: 2024-09-27T13:18:26Z
mit.journal.volume: 16 (en_US)
mit.journal.issue: 18 (en_US)
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed (en_US)

