Open surgery represents a dominant proportion of procedures performed, but has lagged behind endoscopic surgery in video-based insights due to the difficulty of obtaining high-quality open surgical video. Automated detection of the open surgical wound would enhance tracking and stabilization of body-worn cameras to optimize video capture for these procedures. We present results using a Mask R-CNN to identify the surgical wound (the “area of interest”, AOI) in image sets derived from 27 open neck procedures (a 2310-image training/validation set and a 1163-image testing set). Bounding box application to the surgical wound was reliable (F-1 > 0.905) in the testing set, with a <5% false positive rate (recognizing non-wound areas as the AOI). Mask application to greater than 50% of the wound area also had good success (F-1 = 0.831) under parameters set for high specificity. When applied to short video clips as proof-of-principle, the model performed well both with emerging AOI (i.e., identifying the wound as incisions were developed) and with recapture of the AOI following obstruction. Overall, we identified image lighting quality and the presence of distractors (e.g., bloody sponges) as the primary sources of model errors on visual review. These data serve as a first demonstration of open surgical wound detection using first-person video footage, and set the stage for further work in this area.
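The abstract reports F-1 scores and a false positive rate for wound detection. As a minimal sketch of how such metrics are derived from per-frame detection counts (all counts below are hypothetical illustrations, not figures from the study):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F-1 from detection counts.

    tp: frames where the predicted bounding box correctly covers the wound (AOI)
    fp: detections placed on non-wound areas (false positives)
    fn: wound-containing frames the model missed
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1


# Hypothetical counts for illustration only (not taken from the paper):
p, r, f1 = precision_recall_f1(tp=1100, fp=40, fn=60)
fpr_proxy = 40 / (1100 + 40)  # share of detections landing on non-wound areas
```

A threshold tuned for high specificity (as described for the mask results) trades recall for fewer false positives, lowering F-1 while keeping the false positive rate small.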