Show simple item record

dc.contributor.author  Paranawithana, Ishara
dc.contributor.author  Yang, Liangjing
dc.contributor.author  Chen, Zhong
dc.contributor.author  Youcef-Toumi, Kamal
dc.contributor.author  Tan, U-Xuan
dc.date.accessioned  2021-11-09T16:35:02Z
dc.date.available  2021-11-09T16:35:02Z
dc.date.issued  2018-08
dc.identifier.uri  https://hdl.handle.net/1721.1/137969
dc.description.abstract  © 2018 IEEE. This work proposes a fusion mechanism that overcomes the traditional limitations of vision-guided micromanipulation in plant cells. Despite recent advances in vision-guided micromanipulation, only a handful of studies have addressed the intrinsic issues of micromanipulation in plant cells. Unlike single-cell manipulation, the structural complexity of plant cells makes visual tracking extremely challenging, so the visual tracking approach needs to be complemented with trajectory data from the manipulator. The two sources of data are fused by combining the manipulator trajectory, projected into the image domain, with template tracking data using a score-based weighted-averaging approach, where a similarity score reflecting the confidence of a particular localization result serves as the basis of the weighted average. Because the projected trajectory data of the manipulator is unaffected by visual disturbances such as regional occlusion, fusing the estimates from the two sources improves tracking performance. Experimental results show that the fusion-based tracking mechanism maintains a mean error of 2.15 pixels, whereas template tracking and the projected trajectory data have mean errors of 2.49 and 2.61 pixels, respectively. Path B of the square trajectory demonstrated a significant improvement, with a mean error of 1.11 pixels while 50% of the tracking ROI was occluded by the plant specimen; under these conditions, template tracking and the projected trajectory data show similar performance, with mean errors of 2.59 and 2.58 pixels, respectively. By addressing the limitations and unmet needs in plant cell bio-manipulation, we hope to bridge the gap in the development of automatic vision-guided micromanipulation in plant cells.  en_US
dc.language.iso  en
dc.publisher  Institute of Electrical and Electronics Engineers (IEEE)  en_US
dc.relation.isversionof  10.1109/coase.2018.8560699  en_US
dc.rights  Creative Commons Attribution-Noncommercial-Share Alike  en_US
dc.rights.uri  http://creativecommons.org/licenses/by-nc-sa/4.0/  en_US
dc.source  Other repository  en_US
dc.title  Scene-Adaptive Fusion of Visual and Motion Tracking for Vision-Guided Micromanipulation in Plant Cells  en_US
dc.type  Article  en_US
dc.identifier.citation  Paranawithana, Ishara, Yang, Liangjing, Chen, Zhong, Youcef-Toumi, Kamal and Tan, U-Xuan. 2018. "Scene-Adaptive Fusion of Visual and Motion Tracking for Vision-Guided Micromanipulation in Plant Cells." IEEE International Conference on Automation Science and Engineering, 2018-August.
dc.contributor.department  Massachusetts Institute of Technology. Department of Mechanical Engineering  en_US
dc.relation.journal  IEEE International Conference on Automation Science and Engineering  en_US
dc.eprint.version  Author's final manuscript  en_US
dc.type.uri  http://purl.org/eprint/type/ConferencePaper  en_US
eprint.status  http://purl.org/eprint/status/NonPeerReviewed  en_US
dc.date.updated  2020-08-13T18:19:20Z
dspace.date.submission  2020-08-13T18:19:22Z
mit.journal.volume  2018-August  en_US
mit.license  OPEN_ACCESS_POLICY
mit.metadata.status  Authority Work and Publication Information Needed  en_US
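
Note: the abstract above describes fusing a template-tracking estimate with the manipulator trajectory projected into the image domain, using a score-based weighted average in which a similarity score reflects confidence in the visual localization. The record does not give implementation details, so the following Python sketch is only an illustration of that weighting idea; the function name, thresholds, and the mapping from similarity score to weight are assumptions, not taken from the paper.

    import numpy as np

    def fuse_estimates(p_template, p_trajectory, similarity, s_min=0.5, s_max=0.95):
        """Score-based weighted average of two tip-position estimates (pixels).

        p_template   : (x, y) from visual template tracking.
        p_trajectory : (x, y) from the manipulator trajectory projected into the image.
        similarity   : template-matching similarity score in [0, 1]; higher means the
                       visual estimate is more trustworthy.
        s_min, s_max : illustrative thresholds for mapping the score to a weight
                       (assumed here, not specified in this record).
        """
        # Map the similarity score to a weight w in [0, 1] for the visual estimate.
        w = np.clip((similarity - s_min) / (s_max - s_min), 0.0, 1.0)
        p_template = np.asarray(p_template, dtype=float)
        p_trajectory = np.asarray(p_trajectory, dtype=float)
        # Rely on the projected trajectory when the visual score is low (e.g. under
        # occlusion) and on the template tracker when the score is high.
        return w * p_template + (1.0 - w) * p_trajectory

    # Example: heavy occlusion lowers the similarity score, so the fused estimate
    # stays close to the projected trajectory data.
    print(fuse_estimates((120.0, 85.0), (118.0, 83.0), similarity=0.55))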

