dc.contributor.advisor    Hadfield-Menell, Dylan
dc.contributor.author    Kondic, Jovana
dc.date.accessioned    2024-03-15T19:24:11Z
dc.date.available    2024-03-15T19:24:11Z
dc.date.issued    2024-02
dc.date.submitted    2024-02-21T17:10:11.873Z
dc.identifier.uri    https://hdl.handle.net/1721.1/153789
dc.description.abstract    Human cognition exhibits remarkable abilities in reasoning about the plans of others. Even infants can swiftly generate effective predictions from minimal observations. This capability largely stems from our ability to employ specific assumptions about others' decision-making, while considering potential alternative interpretations that align with reality. Such versatility is particularly crucial in navigation tasks, where multiple strategies exist for avoiding obstacles and reaching a target location. A sophisticated autonomous system should, therefore, be capable of: (1) acknowledging the inherent uncertainty in various obstacle avoidance strategies; and (2) predicting motion plans in a way that recognizes the different possibilities in a given goal-driven navigation scenario. To address these needs, we introduce a framework that captures the stochastic nature of motion planning and prediction through Monte Carlo sampling techniques. We ensure (1) by shifting the focus from pure trajectory optimization to generating a variety of near-optimal paths, and achieve (2) by developing a prediction method capable of capturing the inherent multimodality in the distribution over goal-driven trajectories. For the former, we utilize Markov Chain Monte Carlo (MCMC) methods to obtain trajectory samples that approximate the Boltzmann distribution, a common model for approximate rationality, which incorporates a cost function derived from trajectory optimization literature. For the latter, we develop a Bayesian model of the observed agent, and utilize Bayesian inference to reason about the underlying end goals of their movement. We propose a sequential Monte Carlo method that adapts the MCMC trajectory sampling to construct plausible hypotheses about the agent's motion plan and then updates these hypotheses in real-time with new observations. In experiments conducted within continuous, obstacle-laden environments, we demonstrate our framework's effectiveness for both diversity-aware motion planning and robust inference of latent goals from partial, noisy observations.
dc.publisher    Massachusetts Institute of Technology
dc.rights    In Copyright - Educational Use Permitted
dc.rights    Copyright retained by author(s)
dc.rights.uri    https://rightsstatements.org/page/InC-EDU/1.0/
dc.title    Monte Carlo Methods for Motion Planning and Goal Inference
dc.type    Thesis
dc.description.degree    S.M.
dc.contributor.department    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree    Master
thesis.degree.name    Master of Science in Electrical Engineering and Computer Science
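
The abstract above describes two computational components: MCMC sampling of near-optimal trajectories from a Boltzmann distribution over a trajectory cost, and Bayesian inference over the agent's latent goal from partial observations. The Python below is a minimal illustrative sketch of those two ideas, not the thesis implementation: it assumes a 2D point agent, circular obstacles, a hand-written toy cost, and a crude batch-style goal posterior in place of the sequential Monte Carlo update. All names (trajectory_cost, mcmc_trajectories, goal_posterior, beta, etc.) are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def trajectory_cost(traj, goal, obstacles, w_goal=1.0, w_smooth=0.1, w_obs=5.0):
    """Toy cost: endpoint distance to goal + smoothness + circular-obstacle penetration."""
    goal_term = np.linalg.norm(traj[-1] - goal)
    smooth_term = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1) ** 2)
    obs_term = 0.0
    for center, radius in obstacles:
        d = np.linalg.norm(traj - center, axis=1)
        obs_term += np.sum(np.maximum(0.0, radius - d))
    return w_goal * goal_term + w_smooth * smooth_term + w_obs * obs_term

def mcmc_trajectories(start, goal, obstacles, T=20, n_samples=200, beta=5.0, step=0.05):
    """Metropolis-Hastings over waypoints, targeting p(traj) proportional to exp(-beta * cost)."""
    traj = np.linspace(start, goal, T)          # initialize with a straight line
    cost = trajectory_cost(traj, goal, obstacles)
    samples = []
    for _ in range(n_samples):
        proposal = traj.copy()
        # Perturb interior waypoints only; endpoints stay fixed.
        proposal[1:-1] += rng.normal(scale=step, size=proposal[1:-1].shape)
        new_cost = trajectory_cost(proposal, goal, obstacles)
        # Symmetric Gaussian proposal -> standard MH acceptance ratio.
        if rng.random() < np.exp(-beta * (new_cost - cost)):
            traj, cost = proposal, new_cost
        samples.append(traj.copy())
    return samples

def goal_posterior(observed_prefix, goals, obstacles, beta=5.0, n_samples=100):
    """Weight each candidate goal by how well Boltzmann-sampled plans explain the observed prefix."""
    log_weights = []
    for goal in goals:
        plans = mcmc_trajectories(observed_prefix[0], goal, obstacles, n_samples=n_samples)
        errs = [np.linalg.norm(p[:len(observed_prefix)] - observed_prefix) for p in plans]
        log_weights.append(-beta * min(errs))
    lw = np.asarray(log_weights)
    w = np.exp(lw - lw.max())                   # normalize stably
    return w / w.sum()

# Example: two candidate goals, one circular obstacle, a short observed prefix.
obstacles = [(np.array([0.5, 0.5]), 0.2)]
goals = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
prefix = np.array([[0.0, 0.0], [0.05, 0.1], [0.1, 0.2]])
print(goal_posterior(prefix, goals, obstacles))

The sketch keeps the trajectory endpoints fixed and uses a symmetric Gaussian proposal so the Metropolis-Hastings ratio reduces to the Boltzmann cost difference; a sequential version, as described in the abstract, would instead reweight and resample plan hypotheses as each new observation arrives.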

