Precise navigation is an important task in robot-assisted and minimally invasive surgery. The need for optical markers and the lack of distinct anatomical features on skin or organs complicate tissue tracking with commercial tracking systems. Previous work has shown the feasibility of a 3D optical coherence tomography based system for this purpose. Furthermore, convolutional neural networks have been shown to precisely detect shifts between volumes. However, most experiments have been performed with phantoms or ex-vivo tissue. We introduce an experimental setup and perform measurements on perfused and non-perfused (dead) tissue of in-vivo xenograft tumors. We train 3D siamese deep learning models and evaluate the precision of the motion prediction. We compare the network's ability to predict shifts across different motion magnitudes and along the different volume axes. The root-mean-square errors are 0.12 mm and 0.08 mm on perfused and non-perfused tumor tissue, respectively.
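The abstract does not describe the siamese architecture itself, so as an illustrative stand-in for the shift-estimation task, the following sketch recovers a known translation between two 3D volumes using classical phase correlation in NumPy. This is a swapped-in baseline technique, not the authors' learned model; the volume size and synthetic data are arbitrary choices for the demo.

```python
import numpy as np

def estimate_shift_3d(vol_a, vol_b):
    """Estimate the integer voxel shift of vol_b relative to vol_a
    via phase correlation (normalized cross-power spectrum)."""
    fa = np.fft.fftn(vol_a)
    fb = np.fft.fftn(vol_b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12  # keep only phase information
    corr = np.fft.ifftn(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks past the volume midpoint correspond to negative shifts
    return np.array([p if p <= s // 2 else p - s
                     for p, s in zip(peak, vol_a.shape)])

# demo: recover a known shift on a synthetic 32^3 volume
rng = np.random.default_rng(0)
vol = rng.standard_normal((32, 32, 32))
moved = np.roll(vol, shift=(3, -2, 5), axis=(0, 1, 2))
est = estimate_shift_3d(vol, moved)  # -> array([ 3, -2,  5])
```

Unlike a trained network, this baseline assumes a pure translation with periodic boundaries and yields integer-voxel precision; the paper's learned approach targets sub-voxel accuracy on textured tissue volumes.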