dc.description.abstract | If an airfield operated by the U.S. Air Force is attacked, the current method for assessing its condition is a slow, manual visual inspection process that exposes personnel to dangerous conditions and delays repair operations. Developing a fully autonomous remote assessment solution would improve the speed and safety of this critical task, but it remains an unsolved problem despite continued advances in drone technology, deep learning, and computer vision. This research explores using near-surface hyperspectral sensors as an alternative to red, green, blue (RGB) digital cameras, in hopes of improving detection precision and accuracy for airfield assessment. However, even with modern hyperspectral sensors, the benefit of increased spectral image resolution comes at a cost, creating additional complexity, uncertainty, and sensitivity in the acquisition, data correction, and downstream detection processes.
This work presents a series of tests, each designed to better understand and refine a full hyperspectral image detection sequence, starting with sensor selection and raw data acquisition, proceeding to radiometric correction, and culminating in image recognition by means of supervised deep learning. Regarding sensor selection and data acquisition, the findings indicate that, for many computer vision applications, a hyperspectral camera with high spectral resolution is unnecessary; it is more beneficial to select a snapshot imaging camera that instead maximizes spectral range or spatial resolution. Radiometric correction is then explored, and experiments demonstrate that correction makes machine learning classification models less sensitive to changes in scene illumination, thus improving overall image recognition performance. Finally, deep learning models for image recognition are tested, and a new method for generating synthetic hyperspectral data is developed and shown to be useful for estimating hyperspectral model performance on larger datasets when real data are limited. Overall, the findings presented in this thesis suggest that, by refining the methods used for data acquisition, correction, and detection, hyperspectral imaging improves image recognition compared to traditional RGB cameras. This applies not only to airfield damage assessment but also to other real-world applications requiring computer vision and scene understanding. | |