Endoscopic visualization in brain tumor removal is challenging because tumor tissue is often visually indistinguishable from healthy tissue. Fluorescence imaging can improve tumor delineation, but it impairs reflectance-based visualization of gross anatomical features. To navigate and resect tumors accurately, we developed an ultrathin, flexible scanning fiber endoscope (SFE) that acquires wide-field reflectance and fluorescence images at high resolution. Furthermore, the miniature imaging system is affixed to a robotic arm that provides programmable motion of the SFE, from which we generate multimodal surface maps of the surgical field.
To test this system, synthetic phantoms of a debulked brain tumor cavity are fabricated with fluorescent spots representing residual tumor. Three-dimensional (3D) surface maps of this surgical field are produced by moving the SFE over the phantom during concurrent reflectance and fluorescence imaging (30 Hz video). SIFT-based feature matching between reflectance images is used to select a subset of key frames, which are then reconstructed in 3D by bundle adjustment. The resulting reconstruction yields a multimodal 3D map of the tumor region that can improve visualization and robotic path planning.
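A minimal sketch of the key-frame selection step is shown below, assuming OpenCV SIFT features on the reflectance frames; the overlap threshold, Lowe ratio, and helper names are illustrative choices, not taken from the paper.

```python
import cv2
import numpy as np

def select_key_frames(frames, min_overlap=0.4, ratio=0.75):
    """Keep a frame as a key frame only when its SIFT match overlap with the
    previous key frame drops below `min_overlap`, so the key frames tile the
    surgical field without excessive redundancy."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    def to_gray(img):
        return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img

    key_frames = [frames[0]]
    kp_ref, des_ref = sift.detectAndCompute(to_gray(frames[0]), None)

    for frame in frames[1:]:
        kp, des = sift.detectAndCompute(to_gray(frame), None)
        if des is None or des_ref is None:
            continue
        # Lowe's ratio test keeps only distinctive matches.
        pairs = matcher.knnMatch(des, des_ref, k=2)
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        overlap = len(good) / max(len(kp_ref), 1)
        if overlap < min_overlap:
            key_frames.append(frame)
            kp_ref, des_ref = kp, des
    return key_frames
```

The selected key frames would then be passed to a standard structure-from-motion pipeline (feature matching across key frames, pose estimation, and bundle adjustment) to produce the 3D surface onto which the fluorescence channel is mapped.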
Efficiency in creating these maps is important because they are regenerated multiple times during tumor-margin clean-up. The pre-programmed vector motions of the robot arm holding the SFE are used to constrain the computer vision algorithms, reducing feature search times. Preliminary results indicate that the time to create these 3D multimodal maps of the surgical field can be reduced to one third by using the known trajectories of the surgical robot moving the image-guided tool.
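One way such a trajectory prior can shrink the search is sketched below; this is not the authors' implementation. It assumes a hypothetical `predicted_disp`, an in-image displacement of features predicted from the commanded SFE motion and the camera model, and a tunable `search_radius`; descriptors are compared only against key points inside the motion-predicted window rather than across the whole image.

```python
import numpy as np

def match_with_motion_prior(kp_prev, des_prev, kp_curr, des_curr,
                            predicted_disp, search_radius=20.0):
    """Match SIFT descriptors only against key points lying within
    `search_radius` pixels of the position predicted by the known robot motion,
    avoiding a brute-force search over all key points in the new frame."""
    pts_prev = np.array([kp.pt for kp in kp_prev])   # (N, 2) previous key point positions
    pts_curr = np.array([kp.pt for kp in kp_curr])   # (M, 2) current key point positions
    predicted = pts_prev + np.asarray(predicted_disp)  # expected positions in current frame

    matches = []
    for i, (p, d) in enumerate(zip(predicted, des_prev)):
        # Candidate key points inside the motion-predicted window.
        in_window = np.where(np.linalg.norm(pts_curr - p, axis=1) < search_radius)[0]
        if in_window.size == 0:
            continue
        # Descriptor distance computed only over the reduced candidate set.
        dists = np.linalg.norm(des_curr[in_window] - d, axis=1)
        j = in_window[np.argmin(dists)]
        matches.append((i, int(j)))
    return matches
```

Because the candidate set per feature drops from all key points in the frame to the few inside the predicted window, the matching cost falls roughly in proportion, which is consistent with the reported reduction of map-building time to about one third.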