Machine learning and coresets for automated real-time video segmentation of laparoscopic and robot-assisted surgery
Author(s)
Volkov, Mikhail; Hashimoto, Daniel A.; Rosman, Guy; Meireles, Ozanan R.; Rus, Daniela L.
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike (Open Access Policy)
Abstract
© 2017 IEEE. Context-aware segmentation of laparoscopic and robot-assisted surgical video has been shown to improve performance and perioperative workflow efficiency, and can be used for education and time-critical consultation. Modern pressures on productivity preclude manual video analysis, and hospital policies and legacy infrastructure often prohibit recording and storing large amounts of data. In this paper we present a system that automatically segments video of laparoscopic and robot-assisted procedures into their underlying surgical phases using minimal computational resources and small amounts of training data. Our system combines a support vector machine (SVM) and a hidden Markov model (HMM) with an augmented feature space that captures the variability of these video streams without requiring explicit analysis of the nonrigid, variable surgical environment. By exploiting the data reduction capabilities of online k-segment coreset algorithms, we efficiently produce segmentations of approximately equal quality in real time. We evaluate the system in cross-validation experiments and propose a blueprint for piloting such a system in a real operating room environment with minimal risk factors.
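To make the SVM-plus-HMM stage concrete, the following is a minimal illustrative sketch, not the authors' implementation: an SVM produces per-frame phase probabilities, and a Viterbi pass over a simple "mostly stay in the current phase" transition model smooths them into a phase segmentation. Feature extraction, the augmented feature space, and the online coreset reduction are omitted; the self-transition probability and the usage names below are assumed for illustration only.

# Minimal sketch (assumed structure and values; not the paper's code): an SVM
# scores each frame's surgical phase, and a Viterbi pass over a self-transition-
# biased HMM smooths the per-frame predictions into a phase segmentation.
import numpy as np
from sklearn.svm import SVC

STAY_PROB = 0.95    # assumed self-transition probability (phases persist)

def train_frame_classifier(features, labels):
    """Train an SVM that outputs per-frame phase probabilities."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(features, labels)
    return clf

def viterbi_smooth(frame_probs, stay_prob=STAY_PROB):
    """Decode the most likely phase sequence from per-frame probabilities."""
    n_frames, n_phases = frame_probs.shape
    # Transition model: remain in the current phase with high probability.
    trans = np.full((n_phases, n_phases), (1.0 - stay_prob) / (n_phases - 1))
    np.fill_diagonal(trans, stay_prob)
    log_trans = np.log(trans)
    log_obs = np.log(frame_probs + 1e-12)

    dp = np.zeros((n_frames, n_phases))          # best log-score ending in each phase
    back = np.zeros((n_frames, n_phases), dtype=int)
    dp[0] = log_obs[0] - np.log(n_phases)        # uniform prior over phases
    for t in range(1, n_frames):
        scores = dp[t - 1][:, None] + log_trans  # scores[prev, cur]
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_obs[t]

    path = np.empty(n_frames, dtype=int)
    path[-1] = dp[-1].argmax()
    for t in range(n_frames - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path                                  # one phase label per frame

# Example usage (hypothetical arrays):
#   clf = train_frame_classifier(train_features, train_labels)
#   phases = viterbi_smooth(clf.predict_proba(test_features))

The self-transition bias encodes the assumption that surgical phases persist over many frames, which is what the HMM contributes beyond independent per-frame SVM predictions.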
Date issued
2017-05
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Publisher
IEEE
Citation
Volkov, Mikhail, Hashimoto, Daniel A., Rosman, Guy, Meireles, Ozanan R. and Rus, Daniela. 2017. "Machine learning and coresets for automated real-time video segmentation of laparoscopic and robot-assisted surgery."
Version: Author's final manuscript