Show simple item record

dc.contributor.advisor: Einstein, Herbert H.
dc.contributor.author: Pietersen, Randall
dc.date.accessioned: 2025-08-21T17:00:15Z
dc.date.available: 2025-08-21T17:00:15Z
dc.date.issued: 2025-05
dc.date.submitted: 2025-06-19T19:09:37.922Z
dc.identifier.uri: https://hdl.handle.net/1721.1/162411
dc.description.abstract: If an airfield operated by the U.S. Air Force is attacked, the current method for assessing its condition is a slow visual and manual inspection process, exposing personnel to dangerous conditions and delaying repair operations. Developing a fully autonomous remote assessment solution would improve the speed and safety of this critical task, but it remains an unsolved problem despite continued advances in drone technology, deep learning, and computer vision. This research explores near-surface hyperspectral sensors as an alternative to red, green, blue (RGB) digital cameras, with the aim of improving detection precision and accuracy for airfield assessment. However, even with modern hyperspectral sensors, the benefit of increased spectral image resolution comes at a cost, creating additional complexity, uncertainty, and sensitivity in the acquisition, data correction, and downstream detection processes. This work presents a series of tests, each designed to better understand and refine a full hyperspectral image detection sequence, starting with sensor selection and raw data acquisition, proceeding to radiometric correction, and culminating in image recognition by means of supervised deep learning (DL). Regarding sensor selection and data acquisition, the findings indicate that for many computer vision applications, a hyperspectral camera with high spectral resolution is unnecessary; it is more beneficial to select a snapshot-imaging camera that instead maximizes spectral range or spatial resolution. Radiometric correction is then explored, and experiments demonstrate that correction makes machine learning classification models less sensitive to changes in scene illumination, thus improving overall image recognition performance. Finally, deep learning models for image recognition are tested, and a new method for generating synthetic hyperspectral data is developed and shown to be useful for estimating hyperspectral model performance on larger datasets when real data are limited. Overall, the findings presented in this thesis suggest that by refining the methods used for data acquisition, correction, and detection, hyperspectral imaging improves image recognition compared to traditional RGB cameras. This applies not only to airfield damage assessment but also to other real-world applications requiring computer vision and scene understanding.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Hyperspectral Remote Sensing for UXO Detection and Damage Assessment on Airfield Pavements
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Civil and Environmental Engineering
dc.identifier.orcid: 0000-0002-0374-8108
mit.thesis.degree: Doctoral
thesis.degree.name: Doctor of Philosophy

