Laparoscopic surgery offers patients advantages such as small incisions and quick postoperative recovery. However, surgeons struggle to grasp 3D spatial relationships in the abdominal cavity. Methods have been proposed to present 3D information of the abdominal cavity using augmented reality (AR) or virtual reality (VR). Although 3D geometric information is crucial for such methods, it is difficult to reconstruct dense 3D organ shapes with a feature-point-based 3D reconstruction method such as structure from motion (SfM) because of the appearance characteristics of organs (e.g., texture-less and glossy surfaces). Our research addresses this problem by estimating depth information from laparoscopic images using deep learning. We constructed a training dataset of paired RGB and depth images captured with an RGB-D camera, implemented a depth-image generator based on a generative adversarial network (GAN), and generated a depth image from a single-shot RGB image. Through calibration between the laparoscopic camera and the RGB-D camera, each laparoscopic image was transformed into an RGB image consistent with the training data, and depth images were generated by feeding the transformed laparoscopic images into the GAN generator. The scale parameter that gives the depth image real-world dimensions was calculated by comparing the predicted depth values with the 3D information estimated by SfM. Consequently, the density of the organ model was increased by back-projecting the depth image into 3D space.