Color accuracy is of immense importance in various fields, including biomedical applications, cosmetics, and multimedia. Achieving precise color measurements under diverse lighting sources is a persistent challenge. Recent advancements have resulted in the integration of LED-based digital light processing (DLP) technology into many scanning devices for three-dimensional (3D) imaging, often serving as the primary lighting source. However, such setups are susceptible to color-accuracy issues. Our study delves into DLP-based 3D imaging, specifically focusing on the use of hybrid lighting to enhance color accuracy. We present an empirical dataset containing skin tone patches captured under various lighting conditions, including source combinations and variations in indoor ambient light. A comprehensive qualitative and quantitative analysis of color differences (ΔE00) across the dataset was performed. Our results support the integration of DLP technology with supplementary light sources to achieve optimal color correction outcomes, particularly in skin tone reproduction, which has significant implications for biomedical image analysis and other color-critical applications.
KEYWORDS: Fringe analysis, Digital Light Processing, Light sources and illumination, Skin, Projection systems, Light sources, Color tone, Biomedical applications, 3D modeling, Stereoscopy, Digital image processing
Color accuracy is crucial in several domains such as biomedical imaging, cosmetics, and multimedia. Digital Light Processing (DLP) with LEDs has increasingly become a popular lighting source in 3D scanning systems. Although DLP provides advantages in 3D reconstruction, it poses challenges in maintaining color accuracy. Our research focused on using hybrid lighting to improve the color accuracy of DLP-based 3D sensing systems. We developed an empirical dataset featuring skin tones captured under multiple lighting environments, including variations in indoor ambient lighting. Through qualitative and quantitative evaluations of color differences, we conclude that including auxiliary lighting with DLP is beneficial for color accuracy, particularly in biomedical imaging and other applications in which color accuracy is essential.
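The ΔE00 metric referenced above quantifies perceptual color difference in CIELAB space. The full CIEDE2000 formula is involved; as a minimal illustration of a Lab-space color difference, the simpler CIE76 Euclidean form can be sketched as follows (the patch values below are hypothetical, not taken from the dataset):

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colors (CIE76 formula).

    Note: the studies above use the more elaborate CIEDE2000 (dE00)
    metric; CIE76 is shown here only as a minimal illustration of a
    Lab-space color difference."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical skin-tone patch measured under two lighting setups.
patch_dlp = (65.0, 18.0, 20.0)     # L*, a*, b* under DLP-only lighting
patch_hybrid = (66.0, 16.5, 19.0)  # same patch under hybrid lighting

print(round(delta_e_cie76(patch_dlp, patch_hybrid), 3))  # → 2.062
```

In practice, a library implementation of CIEDE2000 (e.g., `skimage.color.deltaE_ciede2000`) would be used instead of this simplified form.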
Reliably detecting or tracking 3D features is challenging. It often requires preprocessing and filtering stages, along with fine-tuned heuristics for reliable detection. Alternatively, artificial intelligence-based strategies have recently been proposed; however, these typically require many manually labeled images for training. We introduce a method for 3D feature detection using a convolutional neural network and a single 3D image obtained by fringe projection profilometry. We cast the problem of 3D feature detection as an unsupervised detection problem. Hence, the goal is to use a neural network that learns to detect specific features in 3D images from a single unlabeled image. Therefore, we implemented a deep-learning method that exploits inherent symmetries to detect objects with little training data and without ground truth. Subsequently, using a pyramid methodology that rescales each image to be processed, we achieve feature detections of different sizes. Finally, we unify the detections using a non-maximum suppression algorithm. Preliminary results show that the method provides reliable detection under different scenarios with a more flexible training procedure than competing methods.
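The non-maximum suppression step that unifies the multi-scale detections can be sketched in a few lines; the boxes, scores, and IoU threshold below are illustrative values, not taken from the study:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring boxes,
    drop any box overlapping an already-kept box above iou_thr."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thr for j in keep):
            keep.append(i)
    return keep
```

For example, `nms([(0,0,10,10), (1,1,11,11), (50,50,60,60)], [0.9, 0.8, 0.7])` suppresses the second box, which heavily overlaps the first.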
Improving the accuracy of structured light calibration methods has led to the development of pixel-wise calibration models built on top of conventional pinhole-camera models. Because phase encodes depth and transversal information, the pixel-wise methods provide high flexibility to map phase to XYZ coordinates. However, there are different approaches for producing phase-to-coordinate mapping, and there is no consensus on the most appropriate one. In this study, we highlight the current limitations, especially in depth range and accuracy, of several recent pixel-wise calibration methods, along with experimental performance verifications. The results show that there are opportunities for further improving these methods to overcome existing limitations from conventional calibration methods, particularly for low-cost hardware.
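As a minimal sketch of a pixel-wise phase-to-coordinate mapping, assume each camera pixel collects (phase, depth) pairs from calibration planes at known positions and fits a linear model z = a + b·φ by least squares. Real methods may use higher-order or rational mappings; the calibration values below are hypothetical:

```python
def fit_phase_to_z(phases, depths):
    """Least-squares line z = a + b*phi for one camera pixel, from
    calibration poses at known depths. Repeated per pixel, this yields
    a pixel-wise phase-to-depth map (a linear sketch of the idea)."""
    n = len(phases)
    sp, sz = sum(phases), sum(depths)
    spp = sum(p * p for p in phases)
    spz = sum(p * z for p, z in zip(phases, depths))
    b = (n * spz - sp * sz) / (n * spp - sp * sp)
    a = (sz - b * sp) / n
    return a, b

# Hypothetical calibration samples for a single pixel.
a, b = fit_phase_to_z([1.0, 2.0, 3.0], [10.0, 12.0, 14.0])
```

The transversal X and Y coordinates are typically obtained with analogous per-pixel mappings once Z is known.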
Corneal endothelium assessment is carried out via specular microscopy imaging. However, automated image analysis often fails due to inadequate image quality conditions or the presence of dark regions in pathologies such as Fuchs’ dystrophy. Therefore, an early, reliable image classification strategy is required before automated evaluation based on cell segmentation. Moreover, conventional classification approaches rely on manually labeled data, which are difficult to obtain. We propose a two-stage semi-supervised classification algorithm that detects features and predicts a blurring level and guttae severity, allowing us to cluster images by degree of segmentation complexity. For validation, we developed a web-based annotation application and asked two expert ophthalmologists to grade a portion of the 1169 images. Preliminary results show that this approach provides reliable and fast corneal endothelial cell (CEC) image classification.
In structured-light systems, the lens distortions of the camera and the projector reduce the measurement accuracy when calibrated as a standard stereo-vision system. The conventional compensation via distortion coefficients reduces the error, but still leaves a significant residual. Recently, we proposed a hybrid calibration procedure that leverages the standard calibration approach to improve measurement accuracy. This hybrid procedure consisted of building a pixel-wise phase-to-coordinate mapping based on adjusted 3D data obtained from the standard stereo-vision method. Here, we show experimentally that the measurement accuracy can be significantly improved, even using the linear pinhole model and linear mapping functions. We then move to consider the nonlinear model to improve the measurement accuracy further. Encouraging results show that this new calibration method increases the measurement accuracy without requiring elaborate calibration procedures or sophisticated ancillary equipment.
Fringe Projection Profilometry (FPP) with Digital Light Projector technology is one of the most reliable 3D sensing techniques for biomedical applications. However, besides the fringe pattern images, a color texture image is often needed for accurate medical documentation. This image may be acquired either by projecting a white image or by projecting a black image and relying on ambient light. Color constancy is essential for a faithful digital record, although the optical properties of biological tissue make color reproducibility challenging. Furthermore, color perception is highly dependent on the illuminant. Here, we describe a deep learning-based method for skin color correction in FPP. We trained a convolutional neural network using a skin tone color palette acquired under different illumination conditions to learn the mapping relationship between the input color image and its counterpart in the sRGB color space. Preliminary experimental results demonstrate the potential of this approach.
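The CNN described above learns a nonlinear mapping to sRGB. As a deliberately simplified, hypothetical stand-in, an independent gain/offset per RGB channel can be fitted from palette patches by least squares and then applied to new pixels:

```python
def fit_palette(measured, reference):
    """Fit an independent gain/offset per RGB channel from color-chart
    patches. A simplified linear stand-in for the CNN mapping described
    above; all patch values here are illustrative."""
    params = []
    for ch in range(3):
        xs = [m[ch] for m in measured]
        ys = [r[ch] for r in reference]
        n = len(xs)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        g = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        params.append((g, (sy - g * sx) / n))
    return params

def correct(rgb, params):
    """Apply the fitted per-channel linear correction to one pixel."""
    return tuple(g * c + o for c, (g, o) in zip(rgb, params))
```

A real pipeline would use many more patches and a nonlinear model; this only illustrates the supervised mapping idea.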
Automated cell counting in in-vivo specular microscopy images is challenging, especially when single-cell segmentation methods fail due to corneal dystrophy. We aim to obtain reliable cell segmentation from specular microscopy images of both healthy and pathological corneas. Here, we cast the problem of cell segmentation as a supervised multi-class classification problem. Hence, the goal is to learn a mapping relation between an input specular microscopy image and its labeled counterpart, identifying healthy (cells) and dysfunctional regions (e.g., guttae). Using a generative adversarial approach, we trained a U-net model by extracting 96×96 pixel patches from corneal endothelial cell images and the corresponding manual segmentation by a group of expert physicians. Preliminary results show the method's potential to deliver reliable feature segmentation, enabling more accurate cell density estimations for assessing the cornea's state.
KEYWORDS: Imaging systems, 3D image processing, Calibration, Cameras, Ultrasonography, Stereo vision systems, Medical imaging, 3D modeling, Projection systems, 3D acquisition
We propose a three-dimensional (3D) multimodal medical imaging system that combines freehand ultrasound and structured light 3D reconstruction in a single coordinate system without requiring registration. To the best of our knowledge, these techniques have not been combined as a multimodal imaging technique. The system complements the internal 3D information acquired with ultrasound with the external surface measured with the structured light technique. Moreover, the ultrasound probe’s optical tracking for pose estimation was implemented based on a convolutional neural network. Experimental results show the system’s high accuracy and reproducibility, as well as its potential for preoperative and intraoperative applications. The experimental multimodal error, or the distance from two surfaces obtained with different modalities, was 0.12 mm. The code is available in a GitHub repository.
Automated cell counting in in-vivo specular microscopy images is challenging, especially in situations where single-cell segmentation methods fail due to pathological conditions. This work aims to obtain reliable cell segmentation from specular microscopy images of both healthy and pathological corneas. We cast the problem of cell segmentation as a supervised multi-class segmentation problem. The goal is to learn a mapping relation between an input specular microscopy image and its labeled counterpart, indicating healthy (cells) and pathological regions (e.g., guttae). We trained a U-net model by extracting 96×96 pixel patches from corneal endothelial cell images and the corresponding manual segmentation by a physician. Encouraging results show that the proposed method can deliver reliable feature segmentation enabling more accurate cell density estimations for assessing the state of the cornea.
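The 96×96 patch extraction used for training can be sketched as a tiling of patch origins over the image; the inward shift of the last row/column so every patch fits is one common convention, assumed here rather than taken from the papers:

```python
def patch_origins(height, width, patch=96, stride=96):
    """Top-left corners of patch x patch tiles covering an image,
    shifting the last row/column inward so every tile fits entirely.
    Assumes the image is at least patch x patch pixels."""
    ys = list(range(0, height - patch + 1, stride))
    xs = list(range(0, width - patch + 1, stride))
    if ys[-1] != height - patch:
        ys.append(height - patch)
    if xs[-1] != width - patch:
        xs.append(width - patch)
    return [(y, x) for y in ys for x in xs]
```

For a hypothetical 100×200 image this yields six overlapping tiles; each tile and its matching slice of the manual segmentation would form one training pair.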
The implementation and generation of synthetic data for testing algorithms in optical metrology are often difficult to reproduce. In this work, we propose a framework for the generation of reproducible synthetic surface data. We present two study cases using the Code Ocean platform, which is based on Docker and Linux container technologies to turn source code repositories into executable images. i) We simulate interference pattern fringe images as acquired by a Michelson interferometric system. The reflectivity changes due to surface topography and roughness. ii) We simulate phase maps from rough isotropic surfaces. The phase data is simultaneously corrupted by noise and phase dislocations. This method relies on Gaussian-Laplacian pyramids to preserve surface features on different scales. The proposed framework enables reproducible surface data simulations, which could increase the impact of algorithm development in optical metrology.
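A minimal version of study case i), an ideal Michelson interferogram computed from a surface height map, assuming intensity I = a + b·cos(4πh/λ), where the 4π/λ factor reflects the double pass in the interferometer (all values are illustrative):

```python
import math

def interferogram(height_map, wavelength=0.633, a=0.5, b=0.5):
    """Ideal Michelson interference pattern from a surface height map
    (heights in the same units as the wavelength, e.g. micrometers):
    I = a + b*cos(4*pi*h/lambda). Roughness and noise, as in the
    framework above, would be added on top of this clean model."""
    k = 4.0 * math.pi / wavelength
    return [[a + b * math.cos(k * h) for h in row] for row in height_map]
```

A flat region at h = 0 gives maximum intensity, and a step of λ/4 inverts the fringe, as expected from the double-pass geometry.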
It has become customary to calibrate a camera-projector pair in a structured light (SL) system as a stereo-vision setup. The 3D reconstruction is carried out by triangulation from the detected point at the camera sensor and its correspondence at the projector DMD. There are several algebraic formulations to obtain the coordinates of the 3D point, especially in the presence of noise. However, it is not clear which triangulation approach is best. In this study, we aimed to determine the most suitable triangulation method for SL systems in terms of accuracy and execution time. We assess different strategies in which both coordinates in the projector are known (point-point correspondence) and the case in which only one coordinate in the DMD is known (point-line correspondence). We also introduce the idea of estimating the second projector coordinate with epipolar constraints. We carried out simulations and experiments to evaluate the differences between the triangulation methods, considering the phase-depth sensitivity of the system. Our results show that under suboptimal phase-depth sensitivity conditions, the triangulation method does influence the overall accuracy. Therefore, the system should be arranged for optimal phase-depth sensitivity so that any triangulation method ensures the same accuracy.
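One of the simplest triangulation strategies compared in such studies is the midpoint method: with noise, the camera and projector rays do not intersect exactly, so the midpoint of the shortest segment between them is taken as the 3D point. A self-contained sketch (the ray values below are illustrative):

```python
def midpoint_triangulation(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two 3D rays, each given
    by an origin o and a direction d. With noisy correspondences the
    rays are skew, and this midpoint is one classic estimate."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w = [a - b for a, b in zip(o1, o2)]   # o1 - o2
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    den = a * c - b * b                   # zero only for parallel rays
    s = (b * e - c * d) / den             # parameter along ray 1
    t = (a * e - b * d) / den             # parameter along ray 2
    p1 = [o + s * v for o, v in zip(o1, d1)]
    p2 = [o + t * v for o, v in zip(o2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]
```

For two rays that actually intersect, e.g. a camera ray from the origin along z and a projector ray from (-5, 0, 0) toward the same point, the method returns the exact intersection (0, 0, 5).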
The growing need for surgical procedures, monitoring, and interventions of greater precision has led to the development of multimodal medical imaging systems. Multimodal imaging is a strategy to overcome the limitations of individual medical imaging technologies by combining their strengths. In this work, we propose a low-cost multimodal system that combines 3D freehand ultrasound with fringe projection profilometry to obtain information on the external and internal structure of an object of interest. Both modalities are referred to a single coordinate system defined in the calibration, avoiding post-processing and registration of the acquired images. The freehand ultrasound calibration results are similar to those previously reported in the literature using more expensive infrared tracking systems. The calibration reproducibility at the center point of the ultrasound image was 0.6202 mm for 8 independent calibrations. We tested our system on a breast phantom with tumors. Encouraging results show the potential of the system for applications in intraoperative settings.
Fringe Projection Profilometry (FPP) is a widely used technique for optical three-dimensional (3D) shape measurement. Among the existing 3D shape measurement techniques, FPP provides a whole-field 3D reconstruction of objects in a non-contact manner, with high resolution, and fast data processing. The key to accurate 3D shape measurement is the proper calibration of the measurement system. Currently, most calibration procedures in FPP rely on phase-coordinate mapping (PCM) or back-projection stereo-vision (SV) methods. The PCM technique consists in mapping experimental metric XYZ coordinates to recovered phase values by fitting a predetermined function. However, it requires accurately placing 2D or 3D targets at different distances and orientations. Conversely, in the SV method, the projector is regarded as an inverse camera, and the system is modeled using triangulation principles. Therefore, the calibration process can be carried out using 2D targets placed in arbitrary positions and orientations, resulting in a more flexible procedure. In this work, we propose a hybrid calibration procedure that combines SV and PCM methods. The procedure is highly flexible, robust to lens distortions, and has a simple relationship between phase and coordinates. Experimental results show that the proposed method has advantages over the conventional SV model since it needs fewer acquired images for the reconstruction process, and due to its low computational complexity the reconstruction time decreases significantly.
Accurate 3D imaging of human skin features with structured light methods is hindered by subsurface scattering, the presence of hairs, and patient movement. In this work, we propose a wide-field 3D imaging system capable of reconstructing large areas, e.g., the entire surface of the forearm, with an axial accuracy on the order of 10 microns for measuring scattered skin features, such as lesions. By pushing the limits of grating projection, we obtain high-quality fringes within a limited depth of field. We use a second projector for accurate positioning of the object. With two or more cameras, we achieve independent 3D reconstructions automatically merged in a global coordinate system. With the positioning strategy, we acquire two consecutive images for absolute phase retrieval using Fourier Transform Profilometry to ensure accurate phase-to-height mapping. Encouraging experimental results show that the system is able to precisely measure skin features scattered over a large area.
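The Fourier Transform Profilometry step can be illustrated in one dimension: transform the fringe signal, keep a band around the positive carrier frequency, inverse-transform, and remove the carrier to obtain the wrapped phase. The band half-width of 2 bins is an arbitrary choice for this sketch, and the signal is synthetic:

```python
import cmath
import math

def ftp_phase(signal, f0):
    """1-D Fourier Transform Profilometry sketch: isolate the positive
    fundamental of a fringe signal, inverse-transform, and subtract the
    carrier to recover the (wrapped) phase at each sample."""
    n = len(signal)
    # Forward DFT (O(n^2) for clarity; an FFT would be used in practice)
    spec = [sum(signal[m] * cmath.exp(-2j * math.pi * k * m / n)
                for m in range(n)) for k in range(n)]
    # Band-pass: keep a narrow band around the carrier frequency f0,
    # discarding the DC term and the negative-frequency mirror
    band = [spec[k] if abs(k - f0) <= 2 else 0 for k in range(n)]
    # Inverse DFT of the filtered spectrum -> complex analytic fringe
    analytic = [sum(band[k] * cmath.exp(2j * math.pi * k * m / n)
                    for k in range(n)) / n for m in range(n)]
    # Remove the carrier, leaving the phase that encodes surface height
    return [cmath.phase(c * cmath.exp(-2j * math.pi * f0 * m / n))
            for m, c in enumerate(analytic)]

# Synthetic fringe with carrier bin f0 = 8 and constant phase 0.7 rad.
n, f0 = 64, 8
fringe = [1.0 + 0.5 * math.cos(2 * math.pi * f0 * m / n + 0.7)
          for m in range(n)]
recovered = ftp_phase(fringe, f0)
```

For this pure tone, the recovered phase is 0.7 rad at every sample; real data additionally require phase unwrapping and phase-to-height calibration.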
KEYWORDS: Skin, Calibration, Cameras, 3D modeling, 3D metrology, Imaging systems, Fringe analysis, 3D image processing, Dermatology, Profilometers, 3D acquisition, 3D imaging metrology, Medical diagnostic instruments
The skin prick test (SPT) is the standard method for the diagnosis of allergies. It consists of placing an array of allergen drops on the skin of a patient, typically the volar forearm, and pricking them with a lancet to provoke a specific dermal reaction described as a wheal. The diagnosis is performed by measuring the diameter of the skin wheals, although wheals are not usually circular, which leads to measurement inconsistencies. Moreover, the conventional approach is to measure their size with a ruler, a method proven prone to inter- and intra-observer variations. We have developed a 3D imaging system for the 3D reconstruction of the SPT. Here, we describe the proposed method for the automatic measurement of the wheals based on 3D data processing to yield reliable results. The method is based on a robust parametric fitting to the 3D data, from which the diameter is obtained directly. We evaluate the repeatability of the system over 3D reconstructions at different object poses, and we compare the results with those produced by a physician; the system provides higher measurement accuracy.
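The robust parametric fitting in the paper is applied to 3D data; as a simplified 2D analogue, an algebraic (Kåsa) circle fit recovers a wheal's center and diameter directly from boundary points (the points below are synthetic):

```python
def fit_circle(points):
    """Algebraic (Kasa) circle fit: least-squares solution of
    x^2 + y^2 = 2*a*x + 2*b*y + c for center (a, b) and radius r.
    A 2D sketch of the parametric-fitting idea, not the paper's
    robust 3D method."""
    sx = sy = sxx = syy = sxy = sxz = syz = sz = 0.0
    n = len(points)
    for x, y in points:
        z = x * x + y * y
        sx += x; sy += y; sz += z
        sxx += x * x; syy += y * y; sxy += x * y
        sxz += x * z; syz += y * z
    # Augmented 3x3 normal equations for the unknowns (2a, 2b, c)
    m = [[sxx, sxy, sx, sxz],
         [sxy, syy, sy, syz],
         [sx,  sy,  n,  sz]]
    # Gauss-Jordan elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    ta, tb, c = (m[i][3] / m[i][i] for i in range(3))
    a, b = ta / 2, tb / 2
    return (a, b), (a * a + b * b + c) ** 0.5
```

Four points sampled from a circle of radius 3 centered at (1, 2) recover a diameter of 6 exactly; the wheal diameter follows as twice the fitted radius.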
The depth of focus (DOF) defines the axial range of high lateral resolution in the image space for object position. Optical devices with a traditional lens system typically have a limited DOF. However, some applications, such as in ophthalmology, require a large DOF in comparison with a traditional optical system; this is commonly known as extended DOF (EDOF). In this paper, we explore Programmable Diffractive Optical Elements (PDOEs) with EDOF as an alternative solution to visual impairments, especially presbyopia. These DOEs were written onto a reflective liquid crystal on silicon (LCoS) spatial light modulator (SLM). Several designs of the elements are analyzed: the Forward Logarithmic Axicon (FLAX), the Axilens (AXL), the Light Sword Optical Element (LSOE), the Peacock Eye Optical Element (PE), and the Double Peacock Eye Optical Element (DPE). These elements focus an incident plane wave into a segment of the optical axis. The performances of the PDOEs are compared with those of multifocal lenses. In all cases, we obtained the point spread function and the image of an extended object. The results are presented and discussed.
Pixelated liquid crystal displays have been widely used as spatial light modulators to implement programmable diffractive optical elements, particularly diffractive lenses. Many different applications of such components have been developed in information optics and optical processors that take advantage of their properties of great flexibility, easy and fast refreshment, and multiplexing capability in comparison with equivalent conventional refractive lenses. We explore the application of programmable diffractive lenses displayed on the pixelated screen of a liquid crystal on silicon spatial light modulator to ophthalmic optics. In particular, we consider the use of programmable diffractive lenses for the visual compensation of refractive errors (myopia, hypermetropia, astigmatism) and presbyopia. The principles of compensation are described and sketched using geometrical optics and paraxial ray tracing. For the proof of concept, a series of experiments with an artificial eye on an optical bench is conducted. We analyze the compensation precision in terms of optical power and compare the results with those obtained by means of conventional ophthalmic lenses. Practical considerations oriented to feasible applications are provided.
Pixelated liquid crystal displays have been widely used as spatial light modulators to implement programmable diffractive optical elements (DOEs), particularly diffractive lenses. Many different applications of such components have been developed in information optics and optical processors that take advantage of their properties of great flexibility, easy and fast refreshment, and multiplexing capability in comparison with equivalent conventional refractive lenses. In this paper, we explore the application of programmable diffractive lenses displayed on the pixelated screen of a liquid crystal on silicon spatial light modulator (LCoS-SLM) to ophthalmic optics. In particular, we consider the use of programmable diffractive lenses for the visual compensation of some refractive errors (myopia, hyperopia). The theoretical principles of compensation are described and sketched using geometrical optics and paraxial ray tracing. A series of experiments with an artificial eye on an optical bench is conducted to analyze the compensation accuracy in terms of optical power and to compare the results with those obtained by means of conventional ophthalmic lenses. Practical considerations oriented to feasible applications are provided.
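The quadratic phase profile of a programmable diffractive lens, wrapped to 2π for display on the SLM, can be sketched as follows; the wavelength and focal length are illustrative values, and in practice the wrapped phase is additionally quantized to the SLM's gray levels:

```python
import math

def lens_phase(x, y, focal_mm, wavelength_mm=633e-6):
    """Wrapped quadratic phase of a programmable diffractive lens of
    focal length f at pupil point (x, y), all lengths in millimeters:
    phi = -pi*(x^2 + y^2)/(lambda*f), modulo 2*pi. Changing focal_mm
    reprograms the compensating optical power on the SLM."""
    r2 = x * x + y * y
    return (-math.pi * r2 / (wavelength_mm * focal_mm)) % (2.0 * math.pi)
```

At the pupil center the phase is zero, and it wraps each time r² grows by λf, producing the familiar Fresnel-zone ring structure.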
The "peacock eye" phase diffractive element focuses an incident plane wave into a segment of the optical axis, although it introduces a certain amount of aberration. This paper evaluates the extended depth of focus imaging performance of the peacock eye phase diffractive element and explores some potential applications in ophthalmic optics. Two designs of the element are analyzed: a single peacock eye, which produces one focal segment along the axis, and a double peacock eye, which is a spatially multiplexed element that produces two focal segments with partial overlapping along the axis. The performances of the peacock eye-based elements are compared with the performance of a multifocal lens in the image space through numerical simulations as well as optical experiments. In all the cases considered, we obtain the point spread function and the image of an extended object. The results are presented and discussed.
The aged human eye is commonly affected by presbyopia, and therefore, it gradually loses its capability to form images of objects placed at different distances. Extended depth of focus (EDOF) imaging elements can overcome this inability, despite the introduction of a certain amount of aberration. This paper evaluates the EDOF imaging performance of the so-called peacock eye phase diffractive element, which focuses an incident plane wave into a segment of the optical axis and explores the element's potential use for ophthalmic presbyopia compensation optics. Two designs of the element are analyzed: the single peacock eye, which produces one focal segment along the axis, and the double peacock eye, which is a spatially multiplexed element that produces two focal segments with partial overlapping along the axis. The performances of the peacock eye elements are compared with those of multifocal lenses through numerical simulations as well as optical experiments in the image space. The results demonstrate that the peacock eye elements form sharper images along the focal segment than the multifocal lenses and, therefore, are more suitable for presbyopia compensation. The extreme points of the depth of field in the object space, which represent the remote and the near object points, have been experimentally obtained for both the single and the double peacock eye optical elements. The double peacock eye element has better imaging quality for relatively short and intermediate distances than the single peacock eye, whereas the latter seems better for far distance vision.
The paper presents a proposal of multiorder varifocal moiré zone plates, which change their focal length through the lateral displacement of their two components, whose transmittances are described by a cubic profile. The newly introduced element turns out to be an intermediate solution between the hitherto existing elements, namely the refractive Alvarez lens and its diffractive counterpart. Some of the expected properties of multiorder varifocal moiré zone plates are discussed, as well as the reasons why this newly introduced set of elements can be of interest in practical applications.
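The varifocal principle behind such cubic-profile pairs can be checked numerically: subtracting two laterally shifted cubic phase profiles leaves a quadratic (lens-like) term 6Adx² plus a constant 2Ad³, so the optical power grows linearly with the shift d (the coefficients below are arbitrary illustrative values):

```python
def combined_profile(x, a, d):
    """Combined phase of two cubic-profile components laterally
    shifted by +d and -d: a*(x+d)**3 - a*(x-d)**3, which expands to
    6*a*d*x**2 + 2*a*d**3, i.e. a quadratic (lens) term whose optical
    power scales linearly with the displacement d."""
    return a * (x + d) ** 3 - a * (x - d) ** 3
```

For example, with a = 1 and d = 0.5, the profile at x = 2 equals 6·a·d·x² + 2·a·d³ = 12.25, confirming the expansion; doubling d doubles the quadratic coefficient and hence the lens power.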