Neurosurgery is currently a common approach for reducing brain tumor burden, but it relies heavily on the surgeon's experience, and reliable intraoperative guidance is still unavailable. To realize real-time, minimally invasive theranostics of brain tumors, intraoperative precision diagnosis and therapy must be investigated together. Here, we develop an optical minimally invasive theranostic system that uses multimodal optical imaging, including optical coherence tomography (OCT) and photoacoustic imaging (PAI), to visualize brain tumors, and a supercontinuum laser to ablate them. A Fourier-domain algorithm is employed to compute the optical attenuation coefficient (OAC) from OCT images for quantitative identification of cancerous brain tissue. Furthermore, we design an intelligent reseau-based optical theranostic method that integrates intraoperative OCT imaging and laser ablation to treat brain tumors in vivo.
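The OAC estimation step can be illustrated with a minimal sketch. The paper's Fourier-domain algorithm is not reproduced here; instead this assumes the well-known depth-resolved estimator mu[i] = I[i] / (2 * dz * sum over deeper pixels of I), applied to a single OCT A-line. Function name and parameters are illustrative.

```python
import numpy as np

def attenuation_coefficient(a_line, pixel_size_mm):
    """Depth-resolved OAC (mm^-1) from one OCT A-line intensity profile.

    Sketch of the depth-resolved estimator
        mu[i] = I[i] / (2 * dz * sum_{j > i} I[j]),
    an assumption standing in for the paper's Fourier-domain algorithm.
    """
    a_line = np.asarray(a_line, dtype=float)
    # Tail sum of intensity below each depth (excluding the current pixel).
    tail = np.cumsum(a_line[::-1])[::-1] - a_line
    with np.errstate(divide="ignore", invalid="ignore"):
        mu = a_line / (2.0 * pixel_size_mm * tail)
    mu[tail <= 0] = 0.0  # deepest pixels have no tail; leave them zero
    return mu
```

For an ideal exponentially decaying A-line, the estimator recovers the true attenuation at shallow depths; the estimate is biased only near the bottom of the scan, where the tail sum is truncated.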
Fluorescence molecular tomography (FMT) imaging can be used to determine the location, size, and biodistribution of fluorophore biomarkers inside tissues. Yet when using FMT in the reflectance geometry, it is challenging to accurately localize fluorophores. A depth perturbation method is proposed to determine the centroid of a fluorophore inside a tissue-like medium. By superimposing a known thin optical phantom onto the medium surface, the fluorophore depth is deliberately perturbed and signal localization is improved in a stable way. We hypothesize that the fluorophore centroid can be better localized using the fluorescent intensity variation resulting from this depth perturbation. The hypothesis was tested in tissue-like phantoms. The results show that a small fluorophore inclusion (1.2 mm3 volume, depth up to 4.8 mm) can be localized by the method with an error of 0.2 to 0.3 mm. The method is also shown to handle multiple fluorescent inclusions with the assistance of other strategies. Additionally, further studies of the method's performance in the presence of background fluorophores indicated that the small inclusion could be accurately localized at a depth of 1.8 (3.8) mm only when its concentration was not less than 10 (100) times the background level.
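The depth-recovery idea can be sketched under a simplified forward model. The model below (intensity falling as exp(-2*mu_eff*d)/d**2 with depth d) is an assumption for illustration, not the paper's reconstruction; with it, the before/after intensity ratio produced by a slab of known thickness inverts in closed form.

```python
import math

def fluorophore_depth(i_before, i_after, slab_mm, mu_eff):
    """Estimate fluorophore centroid depth (mm) from the intensity drop
    caused by adding a slab of known thickness to the surface.

    Assumed forward model (illustrative, not the paper's):
        I(d) ~ exp(-2 * mu_eff * d) / d**2
    so the ratio r = i_after / i_before satisfies
        r = (d / (d + t))**2 * exp(-2 * mu_eff * t),
    which gives d = k * t / (1 - k) with k = sqrt(r * exp(2 * mu_eff * t)).
    """
    r = i_after / i_before
    k = math.sqrt(r * math.exp(2.0 * mu_eff * slab_mm))
    if not 0.0 < k < 1.0:
        raise ValueError("measured ratio inconsistent with the model")
    return k * slab_mm / (1.0 - k)
```

Synthesizing measurements from the same model and inverting them recovers the assumed depth exactly, which verifies only the algebra; real performance depends on how well the forward model fits the tissue.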
KEYWORDS: 3D image processing, 3D displays, Photography, Integral imaging, Image resolution, Lenses, Camera shutters, Autostereoscopic displays, Microlens array, Image enhancement
Most reported studies have focused on improving the viewing resolution of integral photography (IP) images or widening the viewing angle. To the best of our knowledge, there has been no report of producing an IP image with a depth of several meters for naked-eye viewing. We developed a three-dimensional (3-D) display technique for distant viewing of a 3-D image without the need for special glasses. The photo-based IP method enables precise 3-D images to be displayed at long viewing distances without any influence from deviated or distorted lenses in the lens array. We calculate the elemental images from a referential viewing area for each lens and project the resulting images through each lens. We succeeded in creating a display that appears three-dimensional even when viewed from a distance, with an image depth of 5.7 m or more in front of the display and 3.5 m or more behind it. The long-distance IP display presented in this paper is technically unique, as it is, to our knowledge, the first report of an image with such a long viewing distance.
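The elemental-image calculation reduces to simple pinhole geometry: each pixel behind a lens is rendered from the point on the referential viewing plane that its ray through the lens center reaches. The sketch below assumes a lens array in the plane z = 0, a pixel plane at gap g behind it, and a viewing plane at distance L in front; names and parameters are illustrative.

```python
def pixel_viewpoint(lens_xy, pixel_xy, gap_mm, view_dist_mm):
    """Map one elemental-image pixel to the point on the referential
    viewing plane from which it should be rendered (pinhole-lens sketch).

    lens_xy      : (x, y) of the lens centre in the array plane (mm)
    pixel_xy     : (x, y) of the pixel behind that lens (mm)
    gap_mm       : lens-to-pixel gap g (mm)
    view_dist_mm : distance L from the lens array to the viewing plane (mm)
    """
    lx, ly = lens_xy
    px, py = pixel_xy
    scale = view_dist_mm / gap_mm
    # Extend the ray pixel -> lens centre out to the viewing plane.
    return (lx + (lx - px) * scale, ly + (ly - py) * scale)
```

Rendering the scene from each pixel's viewpoint and storing the result under the corresponding lens yields the elemental-image array; because only the lens centers enter the mapping, individual lens distortions do not perturb the geometry, consistent with the robustness the abstract describes.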
KEYWORDS: 3D image processing, 3D displays, Surgery, Navigation systems, LCDs, Image registration, Medical imaging, Photography, Optical tracking, Computing systems
A surgical navigation system that uses real-time three-dimensional (3D) images was developed. It superimposes a real, intuitive 3D image onto the surgical field for medical diagnosis and operation. The system creates 3D images based on the principle of integral photography (IP), which can display geometrically accurate 3D autostereoscopic images and reproduce motion parallax without any special viewing devices. We developed a new method for creating 3D autostereoscopic images, named Integral Videography (IV), which can also display moving 3D objects, so the displayed image can be updated as the surgeon's field of view changes during the operation. The 3D image is superimposed on the surgical field in the patient via a half-silvered mirror, as if it could be seen through the body. In addition, a real-time IV algorithm for calculating the 3D image of the surgical instruments is used for registration between the locations of the instruments and the organ during the operation. Experiments on targeting point location and critical-area avoidance showed that the errors of this navigation system were in the range of 2 to 3 mm. Accuracy can be improved by introducing a display device with higher pixel density. Because of its simplicity and the accuracy of real-time projected point location, this system should be practically usable in the medical field.
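The real-time part of such an algorithm can be sketched for a single tracked point (e.g. an instrument tip): for each lens, the display pixel to light lies on the line from the point through the lens center, extended to the pixel plane. This is a hedged geometric sketch under assumed coordinates (lens array at z = 0, pixel plane at gap g behind it, point at Z > 0 in front), not the paper's implementation.

```python
def point_to_elemental_pixels(point, lens_centers, gap_mm):
    """Display-plane coordinates lit behind each lens for one tracked
    3D point, e.g. a surgical-instrument tip (illustrative sketch).

    point        : (X, Y, Z) with Z > 0 in front of the lens array (mm)
    lens_centers : iterable of (lx, ly) lens centres (mm)
    gap_mm       : lens-to-display gap g (mm)
    """
    X, Y, Z = point
    s = gap_mm / Z
    # Line from the point through each lens centre, extended to z = -g.
    return [(lx + (lx - X) * s, ly + (ly - Y) * s) for lx, ly in lens_centers]
```

Because each point requires only one multiply-add per lens, updating the instrument's IV image at video rate is feasible, which is what makes real-time registration between instrument and organ practical.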