Modern laparoscopes are equipped with visible-light optical cameras that help surgeons navigate human anatomy. However, because surgical procedures demand precision, surgeons would benefit from auxiliary imaging technologies that allow operations to be performed reliably. To realize this improvement, two cameras [near-infrared (NIR) and red-green-blue (RGB)] can be integrated into a single housing module while maintaining centerpoint alignment and optimal image focus. We have designed a prototype that satisfies these requirements and features cameras that can be individually translated in the x, y, and z directions. Tri-directional translation and tilt-angle fine-tuning allow the cameras to conform to the lens focal distance and ensure that they capture the same visual field. To demonstrate the usefulness of this housing design, we characterize its optical alignment, fields of view, and depth of focus, and describe a custom-fabricated snapshot imager for associated medical applications in real-time, intraoperative settings.

The housing module consists of a casing module for each camera and a central cube that serves as an interface between the light-collection optics at the front of the cube and the two optical cameras. A dichroic filter positioned at 45 degrees within the cube transmits near-infrared wavelengths to the NIR camera at the back and reflects visible light to the RGB camera at the bottom. To improve image focus, the casing modules can be moved in and out of the cube and fine-tuned by varying the relative mounting-screw tensions. Slots and spacers allow the cameras to be calibrated so that they share the same centerpoint.
During thyroid surgery, parathyroid glands may be accidentally extracted because their shapes and colors resemble those of the surrounding tissues (lymph nodes, fat, and thyroid tissue). To avoid damaging or resecting these vulnerable glands, we aim to help surgeons better identify the parathyroids with real-time bounding boxes on a screen available in the operating room. Parathyroids are autofluorescent when excited with near-infrared (NIR) light; therefore, videos recorded simultaneously in NIR and RGB color formats can be used to train a deep learning model for robust object detection and localization without the need for expert annotation. The use of NIR images facilitates generation of the ground-truth dataset. We collected videos from 16 patients during total thyroidectomy. The videos were first decomposed into a series of images sampled every 10 frames. An intensity threshold was then applied to the NIR images, producing images in which the parathyroid can be easily selected; from these, ground-truth bounding boxes were generated. Our ground-truth database contained over 600 images, of which 540 contained parathyroid glands and 66 did not. We ran Faster R-CNN twice: first to perform localization using only the images containing parathyroids, and second to perform classification using the entire dataset. For the first task, we achieved an average intersection over union of 85%; for the second, we obtained a precision of 98% and a recall of 100%. Given the limited dataset, we are very excited by these results.
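The ground-truth generation and evaluation steps described above can be sketched as follows. This is a minimal illustration only, assuming single-channel NIR frames supplied as NumPy arrays; the function names and the threshold value are hypothetical, not taken from the authors' pipeline:

```python
import numpy as np

def bbox_from_nir(nir_frame, threshold):
    """Derive a ground-truth box from a thresholded NIR intensity frame.

    Pixels at or above the threshold form a binary mask; the box is the
    tight axis-aligned extent of that mask, returned as (x0, y0, x1, y1).
    Returns None when no pixel exceeds the threshold (no parathyroid).
    """
    mask = nir_frame >= threshold
    if not mask.any():
        return None
    rows = np.any(mask, axis=1)          # rows containing signal
    cols = np.any(mask, axis=0)          # columns containing signal
    y0, y1 = np.where(rows)[0][[0, -1]]
    x0, x1 = np.where(cols)[0][[0, -1]]
    return (int(x0), int(y0), int(x1), int(y1))

def iou(box_a, box_b):
    """Intersection over union of two inclusive (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix1 - ix0 + 1) * max(0, iy1 - iy0 + 1)
    area_a = (box_a[2] - box_a[0] + 1) * (box_a[3] - box_a[1] + 1)
    area_b = (box_b[2] - box_b[0] + 1) * (box_b[3] - box_b[1] + 1)
    return inter / (area_a + area_b - inter)
```

In practice the predicted boxes would come from the Faster R-CNN detector, and `iou` would be averaged over the localization test set to produce the 85% figure reported above.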