Towards real-time light field processing for quantitative imaging and perception
Author(s)
Bajpayee, Abhishek
Other Contributors
Massachusetts Institute of Technology. Department of Mechanical Engineering.
Advisor
Alexandra H. Techet.
Abstract
This thesis aims to make light-field-imaging-based 3D particle image velocimetry (PIV) practically feasible and affordable. It also extends light field imaging techniques, originally developed with PIV as the target application, to improved perception in robotics. Building on basic concepts of light field (LF) imaging developed as early as 1996, synthetic aperture (SA) PIV was demonstrated as a means of conducting accurate 3D PIV. When introduced, however, SAPIV had multiple limitations, such as requiring 9 to 10 cameras as opposed to the 4 to 5 required by the popular tomographic PIV (Tomo-PIV) technique. SA reconstruction also suffered from low reconstruction quality, slow computation, and a lack of robust, easy-to-use software. As a result, adoption of SAPIV as a flow visualization technique was limited. Particle field reconstruction using LF or SA imaging succeeds by systematically eliminating backscatter from illuminated particles in an experimental scene. A densely seeded PIV experiment can produce a large amount of backscatter from particles in the volume of interest, and a large synthetic aperture setup can effectively see through this backscatter and accurately resolve features of interest at specific spatial coordinates. The work on 3D PIV presented in this thesis improves the efficiency with which SA reconstruction eliminates out-of-focus particle backscatter, thereby improving accuracy with fewer resources. These developments bridge the gap between SAPIV and other 3D PIV techniques. Coupled with the homography-fit-based SA reconstruction technique, the presented results show that, for the same experimental setup, SAPIV is significantly faster and cheaper while achieving the same high level of accuracy.
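The core idea behind synthetic aperture refocusing described above can be illustrated with a minimal sketch: shift each camera's image so that features on a chosen focal plane align, then average the stack. In-focus particles reinforce while out-of-focus backscatter is smeared out and can be suppressed by thresholding. This is an illustrative simplification, not the thesis's actual pipeline; the per-camera pixel shifts and the threshold fraction here are hypothetical stand-ins for what a real setup derives from calibration (e.g. via homography fits), and practical SAPIV implementations often use a multiplicative rather than additive combination.

```python
import numpy as np

def sa_refocus(images, shifts, keep_frac=0.5):
    """Additive synthetic aperture refocusing (illustrative sketch).

    images : list of equally sized 2D arrays, one per camera.
    shifts : hypothetical (dy, dx) integer pixel offsets per camera that
             align the chosen focal plane; in practice these come from
             calibration, which is not shown here.
    """
    h, w = images[0].shape
    stack = np.zeros((len(images), h, w))
    for i, (img, (dy, dx)) in enumerate(zip(images, shifts)):
        # Align this camera's view of the focal plane with the others.
        stack[i] = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    refocused = stack.mean(axis=0)
    # Simple intensity threshold to suppress smeared, out-of-focus
    # backscatter; real pipelines use more principled criteria.
    return np.where(refocused > keep_frac * refocused.max(), refocused, 0.0)
```

As a usage example, a particle seen with a different horizontal disparity in each camera is recovered at its focal-plane location once the disparities are undone, while any feature off the focal plane would fail to align and fall below the threshold.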
The theoretical developments in LF imaging for 3D PIV, which improve visibility in scenes with heavy volumetric backscatter, are well suited to applications in other areas as well. Mobile robots, especially autonomous cars, currently use multiple sensors such as cameras, LIDAR, and RADAR for localization and perception. However, visual sensing and autonomy remain challenging in edge cases and under unexpected scene changes, such as poor lighting or extreme weather with heavy rain or snow. We present a framework that allows any multi-camera system to be used as an array for LF capture, along with a rendering formulation that lets us render LF images along arbitrary surfaces in a scene. Furthermore, by implementing our rendering technique on graphics processing units (GPUs), which have recently become affordable and easily available, we demonstrate the use of LF imaging for real-time perception for the first time. We envision that this framework can eventually help improve perception for robots by supplementing higher-level algorithms.
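The rendering formulation mentioned above can be sketched as a per-pixel gather: each output pixel on the chosen focal surface is projected into every camera (here via a planar 3x3 homography, the simplest case of an arbitrary surface), sampled, and averaged. The sketch below is a CPU illustration under assumed calibration; the homographies, nearest-neighbour sampling, and function names are assumptions for the example, not the thesis's implementation. Because every output pixel is computed independently, the same structure maps naturally onto GPU threads, which is what makes real-time rendering feasible.

```python
import numpy as np

def apply_homography(H, pts):
    """Map an Nx2 array of (x, y) pixel coordinates through a 3x3 homography."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def render_plane(images, homographies, out_shape):
    """Render an LF image on a planar focal surface (illustrative sketch).

    Each output pixel is projected into every camera via that camera's
    (assumed known) homography, sampled with nearest-neighbour lookup,
    and averaged. The loop body is embarrassingly parallel per pixel.
    """
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    acc = np.zeros(h * w)
    for img, H in zip(images, homographies):
        src = np.rint(apply_homography(H, pts)).astype(int)
        # Ignore samples that project outside this camera's image.
        valid = ((src[:, 0] >= 0) & (src[:, 0] < img.shape[1]) &
                 (src[:, 1] >= 0) & (src[:, 1] < img.shape[0]))
        acc[valid] += img[src[valid, 1], src[valid, 0]]
    return (acc / len(images)).reshape(h, w)
```

Extending this from a plane to an arbitrary surface amounts to replacing the single homography per camera with a per-pixel projection of the surface point, which leaves the data-parallel structure unchanged.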
Description
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 139-147).
Date issued
2018
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering
Publisher
Massachusetts Institute of Technology
Keywords
Mechanical Engineering.