
dc.contributor.author: Meireles, Ozanan R.
dc.contributor.author: Rosman, Guy
dc.contributor.author: Altieri, Maria S.
dc.contributor.author: Carin, Lawrence
dc.contributor.author: Hager, Gregory
dc.contributor.author: Madani, Amin
dc.contributor.author: Padoy, Nicolas
dc.contributor.author: Pugh, Carla M.
dc.contributor.author: Sylla, Patricia
dc.contributor.author: Ward, Thomas M.
dc.contributor.author: Hashimoto, Daniel A.
dc.date.accessioned: 2021-11-01T14:33:50Z
dc.date.available: 2021-11-01T14:33:50Z
dc.date.issued: 2021-07-06
dc.identifier.uri: https://hdl.handle.net/1721.1/136860
dc.description.abstract [en_US]:
Background: The growing interest in analysis of surgical video through machine learning has led to increased research efforts; however, common methods of annotating video data are lacking. There is a need to establish recommendations on the annotation of surgical video data to enable assessment of algorithms and multi-institutional collaboration.
Methods: Four working groups were formed from a pool of participants that included clinicians, engineers, and data scientists. The working groups focused on four themes: (1) temporal models, (2) actions and tasks, (3) tissue characteristics and general anatomy, and (4) software and data structure. A modified Delphi process was used to create a consensus survey based on recommendations suggested by each working group.
Results: After three Delphi rounds, consensus was reached on annotation recommendations within each of these domains, and a hierarchy for the annotation of temporal events in surgery was established.
Conclusions: While additional work remains to achieve accepted standards for video annotation in surgery, the consensus recommendations on a general annotation framework presented here lay the foundation for standardization. This type of framework is critical to enabling diverse datasets, performance benchmarks, and collaboration.
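The abstract mentions a hierarchy for annotating temporal events in surgical video, but this record does not include the schema itself. The following is only an illustrative sketch, in Python, of how nested temporal annotations of that general kind could be represented; the class name, the `level` vocabulary ("phase", "step", "action"), and the example labels are invented for illustration and are not the framework defined by the SAGES consensus.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class TemporalAnnotation:
    """One annotated interval of surgical video (times in seconds).

    'level' is a hypothetical label such as "phase", "step", or "action";
    the actual levels and vocabulary come from the full paper, not this sketch.
    """
    label: str
    level: str
    start: float
    end: float
    annotator: Optional[str] = None
    children: List["TemporalAnnotation"] = field(default_factory=list)

    def add_child(self, child: "TemporalAnnotation") -> None:
        # A child interval (e.g., an action within a step) must fall
        # inside its parent's interval for the hierarchy to stay consistent.
        if not (self.start <= child.start and child.end <= self.end):
            raise ValueError("child interval must lie within parent interval")
        self.children.append(child)


# Hypothetical example: a dissection phase containing one step,
# which in turn contains one short action.
phase = TemporalAnnotation("dissection", "phase", 120.0, 900.0)
step = TemporalAnnotation("divide adhesions", "step", 130.0, 400.0)
action = TemporalAnnotation("cut", "action", 200.0, 210.0)
phase.add_child(step)
step.add_child(action)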
dc.publisher: Springer US [en_US]
dc.relation.isversionof: https://doi.org/10.1007/s00464-021-08578-9 [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: Springer US [en_US]
dc.title: SAGES consensus recommendations on an annotation framework for surgical video [en_US]
dc.type: Article [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dc.date.updated: 2021-08-07T03:38:54Z
dc.language.rfc3066: en
dc.rights.holder: The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature
dspace.embargo.terms: Y
dspace.date.submission: 2021-08-07T03:38:54Z
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed

