dc.contributor.advisor | Steven Dubowsky. | en_US |
dc.contributor.author | Lichter, Matthew D. (Matthew Daniel), 1977- | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Dept. of Mechanical Engineering. | en_US |
dc.date.accessioned | 2008-03-26T20:30:22Z | |
dc.date.available | 2008-03-26T20:30:22Z | |
dc.date.issued | 2005 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/30337 | |
dc.description | Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005. | en_US |
dc.description | "February 2005." | en_US |
dc.description | Includes bibliographical references (leaves 133-140). | en_US |
dc.description.abstract | Future space missions are expected to use autonomous robotic systems to carry out a growing number of tasks. These tasks may include the assembly, inspection, and maintenance of large space structures; the capture and servicing of satellites; and the redirection of space debris that threatens valuable spacecraft. Autonomous robotic systems will require substantial information about the targets with which they interact, including their motions, dynamic model parameters, and shape. However, this information is often not available a priori, and therefore must be estimated in orbit. This thesis develops a method for simultaneously estimating dynamic state, model parameters, and geometric shape of arbitrary space targets, using information gathered from range imaging sensors. The method exploits two key features of this application: (1) the dynamics of targets in space are highly deterministic and can be accurately modeled; and (2) several sensors will be available to provide information from multiple viewpoints. These features enable an estimator design that is not reliant on feature detection, model matching, optical flow, or other computation-intensive pixel-level calculations. It is therefore robust to the harsh lighting and sensing conditions found in space. Further, these features enable an estimator design that can be implemented in real-time on space-qualified hardware. The general solution approach consists of three parts that effectively decouple spatial- and time-domain estimations. The first part, referred to as kinematic data fusion, condenses detailed range images into coarse estimates of the target's high-level kinematics (position, attitude, etc.). | en_US |
dc.description.abstract | (cont.) A Kalman filter uses the high-fidelity dynamic model to refine these estimates and extract the full dynamic state and model parameters of the target. With an accurate understanding of target motions, shape estimation reduces to the stochastic mapping of a static scene. This thesis develops the estimation architecture in the context of both rigid and flexible space targets. Simulations and experiments demonstrate the potential of the approach and its feasibility in practical systems. | en_US |
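The abstract describes a pipeline in which coarse kinematic estimates produced by data fusion are refined by a Kalman filter built on a high-fidelity dynamic model. The following is a minimal, hypothetical sketch of that refinement step only, assuming a linear constant-velocity model and a 1-D position measurement; the thesis's actual estimator also handles attitude, inertial parameters, and flexible-body dynamics, none of which are shown here. All names, dimensions, and noise values are illustrative assumptions, not taken from the thesis.

    # Minimal sketch (not the thesis implementation): a linear Kalman filter
    # that refines coarse, noisy kinematic measurements (here, 1-D position)
    # into a full dynamic state (position and velocity) using a known
    # dynamic model. All values below are illustrative assumptions.
    import numpy as np

    dt = 0.1                                   # sample period [s] (assumed)
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity dynamic model
    H = np.array([[1.0, 0.0]])                 # only position is measured
    Q = 1e-4 * np.eye(2)                       # process-noise covariance (assumed)
    R = np.array([[1e-2]])                     # measurement-noise covariance (assumed)

    x = np.zeros((2, 1))                       # state estimate [position; velocity]
    P = np.eye(2)                              # state covariance

    def kalman_step(x, P, z):
        """One predict/update cycle on a scalar position measurement z."""
        # Predict: propagate state and covariance through the dynamic model.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update: correct the prediction with the coarse measurement.
        y = z - H @ x_pred                     # innovation
        S = H @ P_pred @ H.T + R               # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(2) - K @ H) @ P_pred
        return x_new, P_new

    # Example: feed in simulated coarse position measurements of a drifting target.
    rng = np.random.default_rng(0)
    true_pos, true_vel = 0.0, 0.5
    for _ in range(100):
        true_pos += true_vel * dt
        z = np.array([[true_pos + 0.1 * rng.standard_normal()]])
        x, P = kalman_step(x, P, z)

    print("estimated state:", x.ravel())       # approaches [position, velocity]

Once the filter has converged on the target's motion, shape estimation can proceed as stochastic mapping of a static scene, as the abstract notes.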
dc.description.statementofresponsibility | by Matthew D. Lichter. | en_US |
dc.format.extent | 160 leaves | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Mechanical Engineering. | en_US |
dc.title | Shape, motion, and inertial parameter estimation of space objects using teams of cooperative vision sensors | en_US |
dc.type | Thesis | en_US |
dc.description.degree | Ph.D. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Mechanical Engineering | en_US |
dc.identifier.oclc | 61126308 | en_US |