DSpace@MIT
Shape, motion, and inertial parameter estimation of space objects using teams of cooperative vision sensors

Author(s)
Lichter, Matthew D. (Matthew Daniel), 1977-
Download
Full printable version (25.27 MB)
Other Contributors
Massachusetts Institute of Technology. Dept. of Mechanical Engineering.
Advisor
Steven Dubowsky.
Terms of use
M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Future space missions are expected to use autonomous robotic systems to carry out a growing number of tasks. These tasks may include the assembly, inspection, and maintenance of large space structures; the capture and servicing of satellites; and the redirection of space debris that threatens valuable spacecraft. Autonomous robotic systems will require substantial information about the targets with which they interact, including their motions, dynamic model parameters, and shape. However, this information is often not available a priori, and therefore must be estimated in orbit. This thesis develops a method for simultaneously estimating dynamic state, model parameters, and geometric shape of arbitrary space targets, using information gathered from range imaging sensors. The method exploits two key features of this application: (1) the dynamics of targets in space are highly deterministic and can be accurately modeled; and (2) several sensors will be available to provide information from multiple viewpoints. These features enable an estimator design that is not reliant on feature detection, model matching, optical flow, or other computation-intensive pixel-level calculations. It is therefore robust to the harsh lighting and sensing conditions found in space. Further, these features enable an estimator design that can be implemented in real-time on space-qualified hardware. The general solution approach consists of three parts that effectively decouple spatial- and time-domain estimations. The first part, referred to as kinematic data fusion, condenses detailed range images into coarse estimates of the target's high-level kinematics (position, attitude, etc.).
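To make the kinematic data fusion step concrete, the following Python sketch shows one way a fused (N, 3) range point cloud might be condensed into a coarse pose. It is an illustration only, not the algorithm from the thesis; the function name, the centroid/principal-axis reduction, and the synthetic cloud are all assumptions.

    import numpy as np

    def coarse_pose_from_points(points):
        """Condense an (N, 3) fused point cloud into a coarse pose estimate.

        Returns the centroid (a stand-in for position) and a proper rotation
        matrix built from the cloud's principal axes (a stand-in for attitude).
        Illustrative reduction only, not the estimator from the thesis.
        """
        centroid = points.mean(axis=0)
        centered = points - centroid
        # Principal axes of the scatter provide a crude body-fixed frame.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        rotation = vt.T
        if np.linalg.det(rotation) < 0:
            rotation[:, -1] *= -1.0          # enforce a right-handed frame
        return centroid, rotation

    # Example: a synthetic elongated cloud standing in for fused sensor data.
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.5])
    position, attitude = coarse_pose_from_points(cloud)

The point of this step, as the abstract notes, is that it is cheap and avoids feature detection or other pixel-level computation; each sensor would contribute points in a common frame before such a reduction.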
 
A Kalman filter uses the high-fidelity dynamic model to refine these estimates and extract the full dynamic state and model parameters of the target. With an accurate understanding of target motions, shape estimation reduces to the stochastic mapping of a static scene. This thesis develops the estimation architecture in the context of both rigid and flexible space targets. Simulations and experiments demonstrate the potential of the approach and its feasibility in practical systems.
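The time-domain step can be pictured as a standard predict/update Kalman cycle. The sketch below is not the thesis implementation: a one-axis constant-velocity model stands in for the high-fidelity rigid-body (or flexible-body) dynamics, and the sample period and noise covariances are assumed values chosen for illustration.

    import numpy as np

    dt = 0.1                                  # sample period [s] (assumed)
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])                # only position is measured
    Q = 1e-4 * np.eye(2)                      # process noise (assumed)
    R = np.array([[0.05]])                    # measurement noise (assumed)

    def kalman_step(x, P, z):
        """One predict/update cycle on a coarse position measurement z."""
        # Predict through the dynamic model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the coarse measurement from kinematic data fusion.
        innovation = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ innovation
        P = (np.eye(2) - K @ H) @ P
        return x, P

    # Feed a stream of noisy coarse measurements; velocity is recovered even
    # though it is never measured directly.
    x, P = np.zeros(2), np.eye(2)
    rng = np.random.default_rng(1)
    for k in range(50):
        z = np.array([0.2 * k + rng.normal(0.0, 0.2)])
        x, P = kalman_step(x, P, z)

In the thesis, the same recursion runs on a full dynamic model, which is what allows the filter to recover model parameters as well as the dynamic state from the coarse kinematic measurements.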
 
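Once the target's motion is known, each range measurement can be transformed into the target's body frame, where the scene is static. The sketch below illustrates only that idea, not the stochastic mapping developed in the thesis: it accumulates hit counts in a coarse voxel grid, and the pose inputs, grid resolution, and function name are assumptions.

    import numpy as np

    def accumulate_shape(points_world, R_body_to_world, t_body, grid, res=0.1):
        """Bin world-frame surface points into a body-frame voxel grid.

        R_body_to_world and t_body come from the state estimator; once they
        are known, the target is effectively a static scene in its own frame.
        """
        # world -> body: p_body = R^T (p_world - t)
        points_body = (points_world - t_body) @ R_body_to_world
        for p in points_body:
            idx = tuple(np.floor(p / res).astype(int))
            grid[idx] = grid.get(idx, 0) + 1   # crude evidence accumulation
        return grid

    # Example usage with a dummy pose and synthetic points.
    grid = {}
    pts = np.random.default_rng(2).normal(size=(200, 3))
    grid = accumulate_shape(pts, np.eye(3), np.zeros(3), grid)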
Description
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005.
 
"February 2005."
 
Includes bibliographical references (leaves 133-140).
 
Date issued
2005
URI
http://hdl.handle.net/1721.1/30337
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering
Publisher
Massachusetts Institute of Technology
Keywords
Mechanical Engineering.

Collections
  • Doctoral Theses
