DSpace@MIT

Scene-Adaptive Fusion of Visual and Motion Tracking for Vision-Guided Micromanipulation in Plant Cells

Author(s)
Paranawithana, Ishara; Yang, Liangjing; Chen, Zhong; Youcef-Toumi, Kamal; Tan, U-Xuan
Download: Accepted version (2.043 MB)
Open Access Policy

Terms of use
Creative Commons Attribution-Noncommercial-Share Alike http://creativecommons.org/licenses/by-nc-sa/4.0/
Abstract
© 2018 IEEE. This work proposes a fusion mechanism that overcomes the traditional limitations of vision-guided micromanipulation in plant cells. Despite recent advances in vision-guided micromanipulation, only a handful of studies have addressed the intrinsic issues of micromanipulation in plant cells. Unlike single-cell manipulation, the structural complexity of plant cells makes visual tracking extremely challenging. There is therefore a need to complement the visual tracking approach with trajectory data from the manipulator. The two data sources are fused by projecting the manipulator's trajectory into the image domain and combining it with the template tracking data using a score-based weighted averaging approach. A similarity score reflecting the confidence of each localization result serves as the basis of the weighted average. Because the projected trajectory data of the manipulator is unaffected by visual disturbances such as regional occlusion, fusing the estimates from the two sources improves tracking performance. Experimental results suggest that the fusion-based tracking mechanism maintains a mean error of 2.15 pixels, whereas template tracking and projected trajectory data have mean errors of 2.49 and 2.61 pixels, respectively. Path B of the square trajectory demonstrated a significant improvement, with a mean error of 1.11 pixels while 50% of the tracking ROI was occluded by the plant specimen. Under these conditions, template tracking and projected trajectory data show similar performance, with mean errors of 2.59 and 2.58 pixels, respectively. By addressing the limitations and unmet needs in plant cell bio-manipulation, we hope to bridge the gap in the development of automatic vision-guided micromanipulation in plant cells.
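
The abstract describes score-based weighted averaging of two tip-position estimates, with a similarity score acting as the confidence weight. The sketch below is only an illustration of that idea, not the authors' implementation: the function and variable names (fuse_estimates, s_template, s_trajectory) are hypothetical, and the assumption that both confidence scores are normalized to [0, 1] is ours.

```python
# Hypothetical sketch: score-based weighted fusion of a template-tracking
# estimate and a projected manipulator-trajectory estimate (both in pixels).
# Assumes each source provides a confidence score in [0, 1].
import numpy as np

def fuse_estimates(p_template, s_template, p_trajectory, s_trajectory):
    """Weighted average of two 2D pixel estimates of the tool-tip position.

    p_template   : (x, y) from template tracking in the image
    s_template   : similarity/confidence score of the template match, in [0, 1]
    p_trajectory : (x, y) from the manipulator trajectory projected into the image
    s_trajectory : confidence assigned to the projected trajectory, in [0, 1]
    """
    p_template = np.asarray(p_template, dtype=float)
    p_trajectory = np.asarray(p_trajectory, dtype=float)

    total = s_template + s_trajectory
    if total <= 0.0:
        # No confidence in either source: fall back to the motion-based estimate,
        # which is immune to visual disturbances such as occlusion.
        return p_trajectory

    w_template = s_template / total
    w_trajectory = s_trajectory / total
    return w_template * p_template + w_trajectory * p_trajectory

# Example: the template match is degraded by occlusion (low score), so the
# fused estimate leans toward the projected trajectory.
fused = fuse_estimates(p_template=(120.0, 84.0), s_template=0.3,
                       p_trajectory=(118.0, 86.0), s_trajectory=0.9)
print(fused)  # ~[118.5, 85.5]
```

In this illustration, a low similarity score (for example, during partial occlusion of the tracking ROI) automatically shifts the fused estimate toward the projected trajectory data, which matches the behavior the abstract attributes to the fusion mechanism.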
Date issued
2018-08
URI
https://hdl.handle.net/1721.1/137969
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering
Journal
IEEE International Conference on Automation Science and Engineering
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Paranawithana, Ishara, Yang, Liangjing, Chen, Zhong, Youcef-Toumi, Kamal and Tan, U-Xuan. 2018. "Scene-Adaptive Fusion of Visual and Motion Tracking for Vision-Guided Micromanipulation in Plant Cells." IEEE International Conference on Automation Science and Engineering, 2018-August.
Version: Author's final manuscript

Collections
  • MIT Open Access Articles
