Automated cell counting in in-vivo specular microscopy images is challenging, especially when single-cell segmentation methods fail due to corneal dystrophy. We aim to obtain reliable cell segmentation from specular microscopy images of both healthy and pathological corneas. Here, we cast the problem of cell segmentation as a supervised multi-class classification problem. Hence, the goal is to learn a mapping between an input specular microscopy image and its labeled counterpart, identifying healthy regions (cells) and dysfunctional ones (e.g., guttae). Using a generative adversarial approach, we trained a U-Net model on 96×96 pixel patches extracted from corneal endothelial cell images and the corresponding manual segmentations produced by a group of expert physicians. Preliminary results show the method's potential to deliver reliable feature segmentation, enabling more accurate cell density estimations for assessing the cornea's state.
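The patch-based training setup above can be sketched as a simple sliding-window extraction of paired image/label patches. This is a minimal illustration, not the authors' code: the 96×96 patch size comes from the abstract, while the stride and overlap are assumptions.

```python
import numpy as np

def extract_patches(image, label, patch_size=96, stride=48):
    """Extract paired (image, label) training patches.

    `image` and `label` are 2-D arrays of the same shape.
    The 96x96 patch size follows the abstract; the 50%-overlap
    stride is an assumption for illustration.
    """
    patches = []
    h, w = image.shape
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append((image[y:y + patch_size, x:x + patch_size],
                            label[y:y + patch_size, x:x + patch_size]))
    return patches
```

Each (image, label) pair would then be fed to the generator/discriminator training loop of the adversarial model.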
KEYWORDS: Imaging systems, 3D image processing, Calibration, Cameras, Ultrasonography, Stereo vision systems, Medical imaging, 3D modeling, Projection systems, 3D acquisition
We propose a three-dimensional (3D) multimodal medical imaging system that combines freehand ultrasound and structured light 3D reconstruction in a single coordinate system, without requiring registration. To the best of our knowledge, these techniques have not previously been combined in a multimodal imaging system. The system complements the internal 3D information acquired with ultrasound with the external surface measured with the structured light technique. Moreover, optical tracking of the ultrasound probe for pose estimation was implemented with a convolutional neural network. Experimental results show the system's high accuracy and reproducibility, as well as its potential for preoperative and intraoperative applications. The experimental multimodal error, i.e., the distance between two surfaces obtained with different modalities, was 0.12 mm. The code is available in a GitHub repository.
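A surface-to-surface distance like the reported multimodal error can be computed as a nearest-neighbor point-cloud distance. The sketch below is one common definition (mean distance from each point of one surface to its closest point on the other); the paper's exact metric may differ.

```python
import numpy as np

def multimodal_error(surface_a, surface_b):
    """Mean nearest-neighbor distance from surface_a to surface_b.

    Both inputs are (N, 3) point clouds in the same coordinate
    system. Brute-force O(Na * Nb) sketch; a KD-tree would be
    used in practice for large clouds.
    """
    diffs = surface_a[:, None, :] - surface_b[None, :, :]  # (Na, Nb, 3)
    dists = np.linalg.norm(diffs, axis=2)                  # (Na, Nb)
    return dists.min(axis=1).mean()
```

Because both modalities share one coordinate system by construction, this distance can be evaluated directly, with no registration step beforehand.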
It has become customary to calibrate a camera-projector pair in a structured light (SL) system as a stereo-vision setup. The 3D reconstruction is carried out by triangulation from the detected point at the camera sensor and its correspondence at the projector DMD. There are several algebraic formulations to obtain the coordinates of the 3D point, especially in the presence of noise. However, it is not clear which triangulation approach is best. In this study, we aimed to determine the most suitable triangulation method for SL systems in terms of accuracy and execution time. We assess strategies in which both coordinates in the projector are known (point-point correspondence) and the case in which only one coordinate in the DMD is known (point-line correspondence). We also introduce the idea of estimating the second projector coordinate with epipolar constraints. We carried out simulations and experiments to evaluate the differences between the triangulation methods, considering the phase-depth sensitivity of the system. Our results show that under suboptimal phase-depth sensitivity conditions, the triangulation method does influence the overall accuracy. Therefore, the system should be arranged for optimal phase-depth sensitivity so that any triangulation method ensures the same accuracy.
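The two correspondence cases above can be illustrated with a standard linear (DLT) triangulation plus an epipolar-line step that recovers the missing projector coordinate. This is a minimal sketch of well-known stereo formulas, not the paper's implementation or its full set of compared methods.

```python
import numpy as np

def triangulate_dlt(P_cam, P_proj, x_cam, x_proj):
    """Point-point triangulation by the linear (DLT) method.

    P_cam, P_proj: 3x4 projection matrices.
    x_cam, x_proj: 2-vectors of pixel coordinates.
    Returns the 3D point as a length-3 array.
    """
    A = np.vstack([
        x_cam[0] * P_cam[2] - P_cam[0],
        x_cam[1] * P_cam[2] - P_cam[1],
        x_proj[0] * P_proj[2] - P_proj[0],
        x_proj[1] * P_proj[2] - P_proj[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # solution = right singular vector
    X = Vt[-1]                       # of the smallest singular value
    return X[:3] / X[3]

def estimate_second_coordinate(F, x_cam, u_proj):
    """Point-line case: estimate the missing projector coordinate v
    from the epipolar line l = F @ x_cam (a*u + b*v + c = 0), where
    F is the camera-to-projector fundamental matrix."""
    l = F @ np.array([x_cam[0], x_cam[1], 1.0])
    return -(l[0] * u_proj + l[2]) / l[1]
```

With the second coordinate estimated this way, the point-line case reduces to the point-point formulation and any point-point triangulation method can be applied.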
The growing need for surgical procedures, monitoring, and interventions of greater precision has led to the development of multimodal medical imaging systems. Multimodal imaging is a strategy to overcome the limitations of individual medical imaging technologies by combining their strengths. In this work, we propose a low-cost multimodal system that combines 3D freehand ultrasound with fringe projection profilometry to obtain information about both the external and the internal structure of an object of interest. Both modalities are referred to a single coordinate system defined during calibration, avoiding post-processing and registration of the acquired images. The freehand ultrasound calibration results are similar to those previously reported in the literature using more expensive infrared tracking systems. The calibration reproducibility at the center point of the ultrasound image was 0.6202 mm over 8 independent calibrations. We tested our system on a breast phantom with tumors. Encouraging results show the potential of the system for intraoperative applications.
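A reproducibility figure like the one reported for the ultrasound image's center point can be computed as the spread of that point's reconstructed position across repeated calibrations. The sketch below uses the mean distance to the centroid; this is one common definition, assumed here for illustration, since the abstract does not state the exact metric.

```python
import numpy as np

def reproducibility(points):
    """Reproducibility of a reconstructed point across repeated
    calibrations: mean Euclidean distance of each calibration's
    estimate from the centroid of all estimates.

    `points` is an (N, 3) array, one row per calibration
    (N = 8 in the abstract's experiment).
    """
    centroid = points.mean(axis=0)
    return np.linalg.norm(points - centroid, axis=1).mean()
```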