Augmented Reality Driving Using Semantic Geo-Registration
Author(s)
Villamil, Ryan; Samarasekera, Supun; Chiu, Han-Pang; Murali, Varun; Kessler, G. Drew; Kumar, Rakesh
Download
chiu.pdf (2.773Mb)
Terms of use
Open Access Policy: Creative Commons Attribution-Noncommercial-Share Alike
Abstract
We propose a new approach that uses semantic information to register 2D monocular video frames to the world via 3D geo-referenced data, for augmented reality driving applications. The geo-registration process uses our predicted vehicle pose to generate a rendered depth map for each frame, allowing 3D graphics to be convincingly blended with the real-world view. We also estimate absolute depth values for dynamic objects, up to 120 meters, based on the rendered depth map, and we update the rendered depth map to reflect scene changes over time. This process also yields opportunistic global heading measurements, which are fused with other sensors to improve estimates of the 6 degrees-of-freedom global pose of the vehicle over state-of-the-art outdoor augmented reality systems [5, 18, 19]. We evaluate the navigation accuracy and depth map quality of our system on a driving vehicle in a variety of large-scale environments, producing realistic augmentations.
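To make the two roles of the rendered depth map concrete, here is a minimal NumPy sketch: occluding virtual content against the geo-registered scene, and assigning absolute depth to a detected dynamic object by sampling the rendered depth where the object meets the ground. All function names, signatures, and the ground-contact assumption are illustrative, not taken from the paper; only the 120-meter range comes from the abstract.

```python
import numpy as np

def composite_with_occlusion(frame, overlay_rgba, overlay_depth, rendered_depth):
    """Blend a virtual overlay into a camera frame, hiding overlay pixels
    that fall behind the rendered (geo-registered) scene depth.

    frame:          (H, W, 3) real camera image
    overlay_rgba:   (H, W, 4) virtual content with alpha channel, 0-255
    overlay_depth:  (H, W) depth of the virtual content, in meters
    rendered_depth: (H, W) depth rendered from the 3D geo-referenced model
    """
    alpha = overlay_rgba[..., 3:4] / 255.0
    # A virtual pixel is visible only where it is nearer than the rendered scene.
    visible = (overlay_depth < rendered_depth)[..., None]
    alpha = alpha * visible
    return (frame * (1 - alpha) + overlay_rgba[..., :3] * alpha).astype(frame.dtype)

def dynamic_object_depth(rendered_depth, bbox, max_range=120.0):
    """Assign an absolute depth to a detected dynamic object by sampling the
    rendered depth along the bottom edge of its bounding box, assuming the
    object touches the static, geo-referenced ground there (an assumption
    made for this sketch).

    bbox: (x0, y0, x1, y1) pixel bounding box of the detection
    Returns depth in meters, or None beyond the usable range.
    """
    x0, y0, x1, y1 = bbox
    footprint = rendered_depth[y1 - 1, x0:x1]   # bottom row of the box
    depth = float(np.median(footprint))         # median is robust to outliers
    return depth if depth <= max_range else None

if __name__ == "__main__":
    # Synthetic check: a flat scene rendered 50 m away yields a 50 m estimate.
    rendered = np.full((480, 640), 50.0)
    print(dynamic_object_depth(rendered, (300, 200, 360, 250)))  # -> 50.0
```

In a full system the rendered depth map would come from rasterizing the geo-referenced 3D model at the predicted vehicle pose, and the per-object depths would feed back into the map so occlusion reflects scene changes over time, as the abstract describes.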
Date issued
2018-03
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Aeronautics and Astronautics; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Chiu, Han-Pang, Varun Murali, Ryan Villamil, G. Drew Kessler, Supun Samarasekera, and Rakesh Kumar. “Augmented Reality Driving Using Semantic Geo-Registration.” 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (March 2018).
Version: Author's final manuscript
ISBN
978-1-5386-3365-6