KEYWORDS: Light sources and illumination, Super resolution, Reconstruction algorithms, Imaging systems, Image restoration, Convolution, Real time imaging, Microscopy, Live cell imaging, Image processing, GPU based image processing
Structured illumination microscopy (SIM) is well suited to super-resolution imaging of live cells and holds great promise in bioscience thanks to its fast imaging speed and low photodamage. However, to achieve fast, real-time, and long-term imaging of live cells, further improvements in imaging speed and reductions in photodamage are necessary. In this work, we optimize both the algorithm and its hardware implementation. First, a Fourier ptychographic SIM algorithm (FP-SIM) using only three patterns is proposed to reduce the number of frames required for reconstruction. Second, we use CUDA to design appropriate parallelization schemes that accelerate the algorithm on the GPU. We demonstrate that the GPU improves the speed of the most time-consuming iterative part of the algorithm by nearly 200-fold. At 512×512 pixels, real-time super-resolution microscopy at approximately 16 Hz can be achieved. The faster imaging speed and lower photodamage make our method a promising tool for life science research and biomedical measurement.
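The ~200-fold GPU gain reported above rests on the fact that the dominant iterative update acts on each pixel independently. Below is a minimal, hypothetical sketch of that data parallelism (a toy relaxation-style update standing in for the actual FP-SIM iteration, which is not shown in the abstract); on a GPU, the vectorized form maps naturally to one CUDA thread per pixel:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
observed = rng.random((N, N))   # toy measured image
otf = rng.random((N, N))        # toy per-pixel transfer weights

def update_loop(estimate):
    """Per-pixel update written as an explicit double loop (serial CPU style)."""
    out = np.empty_like(estimate)
    for i in range(N):
        for j in range(N):
            out[i, j] = estimate[i, j] + 0.1 * (observed[i, j] - otf[i, j] * estimate[i, j])
    return out

def update_parallel(estimate):
    """The same update as one array expression: every pixel is independent,
    which is exactly the parallelism a CUDA kernel (one thread per pixel) exploits."""
    return estimate + 0.1 * (observed - otf * estimate)
```

Because the two forms are mathematically identical, a GPU port can be validated pixel-by-pixel against the serial reference, which is also how such a speed-up is typically benchmarked.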
Structured illumination microscopy (SIM) has become one of the most commonly implemented fluorescence super-resolution modalities in the life sciences due to its unique advantages of wide field of view, fast imaging, and weak phototoxicity and photobleaching. However, traditional two-dimensional (2D) SIM suffers from the "missing cone" problem, which makes true three-dimensional (3D) imaging impossible. To solve this problem, 3D SIM has been developed to double the resolution in both the lateral and axial directions. Recently, we proposed a tiled and layer-adaptive 3D SIM based on principal component analysis (PCA) to address the computational complexity and time consumption, the local perturbations of illumination parameters, and the mechanical errors of microscope movement that 3D SIM faces in parameter estimation. The algorithm accelerates the estimation of lateral and axial illumination parameters and improves its accuracy, and is expected to achieve fast, iteration-free, high-precision, high-quality 3D SIM super-resolution imaging.
Structured illumination microscopy (SIM) is well suited to super-resolution imaging of living cells by virtue of its wide field of view, fast imaging, and low phototoxicity. However, a high-quality super-resolution image requires accurate parameter estimation. Recently, we proposed an efficient and robust SIM algorithm based on principal component analysis (PCA-SIM) that integrates iteration-free reconstruction, noise robustness, and limited computational complexity. Nevertheless, as with many parameter estimation algorithms, the performance of PCA-SIM may degrade when high-frequency sinusoidal illumination and a total internal reflection fluorescence (TIRF) objective are used. In this work, we present a parameter estimation method combining cross-correlation and principal component analysis that achieves accurate, iteration-free, sub-pixel estimation even when the first-order spectral information is lacking, promising to enable high-speed, long-term, artifact-free super-resolution imaging of live cells.
Structured illumination microscopy (SIM) stands out among full-field super-resolution imaging modalities in the life sciences because of its high imaging speed, low phototoxicity, and low photobleaching. Traditional SIM requires accurate illumination parameters estimated from nine raw images to achieve artifact-free super-resolution reconstruction. Currently, the most popular algorithm with excellent parameter estimation performance is the two-dimensional cross-correlation algorithm, which requires a large number of cross-correlation evaluations in each direction. However, such a computationally intensive algorithm is a poor fit for real-time, long-term live-cell imaging. In this work, while preserving the accuracy and noise resistance of parameter estimation, we propose a bisection-based parameter estimation algorithm that reduces the number of cross-correlation evaluations in each direction by an order of magnitude. In the algorithm, the integer-pixel position of the wave vector is determined first. Then the cross-correlation values at the two ends of the search interval along the X and Y directions are computed, and the end with the larger value together with the midpoint defines the interval for the next evaluation, so that the search approaches the actual wave vector position from coarse to fine. To verify the proposed algorithm, super-resolution reconstruction of fluorescent samples was performed. The experimental results show that, compared with traditional SIM algorithms, the proposed parameter estimation algorithm is more accurate, more noise-resistant, and less computationally intensive (requiring only about one tenth of the original cross-correlation evaluations), which is highly significant for real-time, long-term live-cell imaging.
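The coarse-to-fine search can be illustrated in one dimension. The sketch below is not the authors' implementation: `corr` is a stand-in for the cross-correlation magnitude, assumed smooth and single-peaked near the true wave vector, and all names and numbers are hypothetical. An integer-pixel scan fixes the starting interval, and bisection then refines the peak position to sub-pixel precision with only two evaluations per step:

```python
import numpy as np

K_TRUE = 37.62  # hypothetical true wave-vector component, in pixels

def corr(k):
    """Stand-in for the cross-correlation magnitude: smooth and single-peaked."""
    return np.exp(-0.05 * (k - K_TRUE) ** 2)

def bisection_peak(f, lo, hi, tol=1e-3):
    """Keep the half interval adjacent to the larger endpoint value; for a
    symmetric single peak inside [lo, hi] this converges to the peak."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) > f(hi):
            hi = mid   # peak lies in the lower half
        else:
            lo = mid   # peak lies in the upper half
    return 0.5 * (lo + hi)

# Step 1: integer-pixel search over the coarse grid
ks = np.arange(100)
k_int = ks[np.argmax(corr(ks))]

# Step 2: sub-pixel refinement around the integer peak
k_sub = bisection_peak(corr, k_int - 1, k_int + 1)
```

Each bisection step halves the interval, so reaching 10⁻³-pixel precision from a ±1-pixel bracket takes about 11 steps, which is where the order-of-magnitude reduction in cross-correlation evaluations comes from.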
In the field of 3D measurement, fringe projection profilometry attracts the most interest due to its high precision and convenience. However, it is still challenging to retrieve an unambiguous absolute phase from a single fringe image. In this paper, we propose a deep learning-based method for retrieving the absolute phase of triangular-wave embedded fringe images. Through learning from a large amount of data, we use two neural networks to obtain a high-precision wrapped phase and a coarse absolute phase from the triangular-wave embedded fringe images, respectively, so as to determine an accurate fringe order. Combining the wrapped phase and the fringe order, we obtain high-precision absolute phases. The experimental results demonstrate that, compared with our previously proposed composite dual-frequency fringe coding strategy, using the fringe image of the new triangular-wave embedded coding strategy as the network input yields an absolute phase with higher accuracy.
Structured illumination microscopy (SIM) is a powerful super-resolution method in bioscience, featuring full-field imaging and high photon efficiency. However, artifact-free super-resolution reconstruction requires precise knowledge of the illumination parameters. In this work, we propose an efficient and robust SIM algorithm based on principal component analysis (PCA-SIM) that combines iteration-free reconstruction, noise robustness, and limited computational complexity. These characteristics make PCA-SIM a promising method for high-speed, long-term, artifact-free super-resolution imaging of live cells.
KEYWORDS: Phase shifts, Microscopy, Real time imaging, Image processing, Super resolution, Optical transfer functions, Modulation, Microscopes, Luminescence, Double patterning technology
KEYWORDS: Super resolution, Reconstruction algorithms, Phase shift keying, Signal to noise ratio, Modulation, Principal component analysis, Microscopy, Fourier transforms, Optical transfer functions, Image resolution
Structured illumination microscopy (SIM) is a widely available super-resolution technique for bioscience, especially for living-cell research, due to its high photon efficiency. However, the quality of SIM depends heavily on the post-processing algorithms (parameter estimation and image reconstruction), where parameter estimation is the critical guarantee of successful super-resolution reconstruction. In this letter, we present a novel SIM approach based on principal component analysis (PCA-SIM) that statistically purifies the experimental parameters from noise contamination to achieve high-definition super-resolution reconstruction. Experiments demonstrate that our method achieves more accurate parameter estimation (0.01-pixel wave vector and 0.1% of 2π initial phase) and superior noise immunity with an order of magnitude higher efficiency than conventional cross-correlation-based methods, offering the possibility of faster, lower-photon-dose, longer-duration living-cell SIM.
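One way to picture the statistical purification is as a rank-one decomposition: an ideal sinusoidal illumination phasor exp(i(kx·x + ky·y + φ)) is a separable (rank-one) matrix, so its leading principal component rejects most of the noise. The sketch below is a simplified, hypothetical illustration of this idea on synthetic data, not the published PCA-SIM pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128
kx, ky = 0.21, -0.13          # synthetic wave-vector components (rad/pixel)
x = np.arange(N)
y = np.arange(N)[:, None]

# Noisy separable phasor: rank one plus complex Gaussian noise
phasor = np.exp(1j * (kx * x + ky * y + 0.7))
phasor += 0.5 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))

# The leading singular vectors carry the clean exp(i*ky*y) and exp(i*kx*x)
# factors (up to a constant phase), so the slope of their unwrapped phase
# gives the wave-vector components
u, s, vh = np.linalg.svd(phasor)
ky_est = np.polyfit(x, np.unwrap(np.angle(u[:, 0])), 1)[0]
kx_est = np.polyfit(x, np.unwrap(np.angle(vh[0])), 1)[0]
```

Despite per-pixel noise comparable in magnitude to the signal, the recovered slopes match kx and ky closely, which illustrates why a PCA-type estimator can be both noise-robust and iteration-free.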
Recovering high-precision 3D information of dynamic scenes from a single fringe pattern is a major challenge in the field of fringe projection profilometry (FPP). Inspired by the successful application of deep learning in FPP, we achieve single-frame, high-precision 3D measurement by combining data-driven and physical-model-based approaches. More specifically, we exploit the powerful feature extraction ability of deep learning to reduce the number of fringe images required for phase demodulation to the physical limit. A stereo phase unwrapping (SPU) approach based on geometric constraints is then used to unwrap the high-frequency wrapped phases obtained from deep learning, which maximizes the efficiency of FPP without projecting additional auxiliary patterns. Experimental results demonstrate that our method can realize high-precision 3D measurement with only a single projection, overcoming the motion sensitivity of traditional methods in dynamic scenes.
KEYWORDS: Composites, Fringe analysis, 3D modeling, 3D metrology, Phase retrieval, Data modeling, Modulation, Neural networks, Spatial frequencies, Projection systems
In recent years, owing to the rapid development of deep learning in computer vision, deep learning has gradually penetrated fringe projection profilometry (FPP) to improve the efficiency of three-dimensional (3D) shape measurement and the accuracy of phase or depth retrieval. To measure dynamic scenes or high-speed events, single-shot fringe projection, whose single-frame measurement property completely overcomes motion-induced errors, becomes one of the optimal options. In this paper, we introduce a deep learning-enabled single-shot fringe projection profilometry with a composite coding strategy. By combining an FPP physical-model-based network architecture with a large dataset, we demonstrate that models generated by training an improved deep convolutional neural network can directly perform high-precision phase retrieval on a single fringe image.
Optical three-dimensional (3D) shape measurement technology has been widely used in industrial manufacturing, defect detection, reverse engineering, human body modeling, pattern recognition, and other fields. As industrial standards continue to advance, ever more functionality and performance are demanded of imaging systems. At present, although real-time imaging systems based on visible-light fringe projection image well and reach real-time speeds, they are still not applicable to face scanning, imaging of shaded objects, etc. Meanwhile, most 3D imaging based on infrared projectors cannot achieve real-time performance due to slow scanning speeds. In this paper, we combine a near-infrared structured light illumination system with stereo phase unwrapping and multi-camera calibration to realize high-precision real-time 3D imaging. A MEMS near-infrared fringe projection device is used as the structured light source, which reduces the harm of visible structured light to human and animal eyes. Experiments on static and dynamic scenes verify that the designed system can achieve high-speed, high-precision 3D reconstruction at 100 frames per second with a measurement accuracy of about 100 µm.
In fringe projection profilometry (FPP), efficiently recovering the absolute phase has always been a great challenge. Stereo phase unwrapping (SPU) technologies based on geometric constraints can eliminate phase ambiguity without projecting any additional patterns, which maximizes the efficiency of absolute phase retrieval. Inspired by recent successes of deep learning in phase analysis, we demonstrate that deep learning can be an effective tool that organically unifies phase retrieval, geometric constraints, and phase unwrapping in a comprehensive framework. Driven by an extensive training dataset, the properly trained neural network can achieve high-quality phase retrieval and robust phase ambiguity removal from only a single-frame projection. Experimental results demonstrate that, compared with conventional SPU, our deep-learning-based approach can more efficiently and robustly unwrap the phase of dense fringe images in a larger measurement volume with fewer camera views.
Stereo vision plays an essential role in non-contact 3D measurement, employing two cameras for applications such as visual synthesis, terrain surveying, and deformation detection. The commonly used Scheimpflug principle states that the object plane, the image plane, and the lens plane intersect in a line; based on it, stereo cameras can be slantwise focused on the object space with an overlapping field of view and depth of field. Building on our previously proposed calibration method, a stereo-rectification method for Scheimpflug telecentric lenses is proposed in this paper. The effectiveness and accuracy of the proposed methods are verified by experiments.
Among the popular fluorescence super-resolution microscopy technologies that have broken the optical diffraction limit, structured illumination microscopy (SIM) holds the advantages of low phototoxicity, weak photobleaching, and fast imaging speed, and it is currently one of the mainstream technologies for super-resolution imaging of living cells. SIM uses the modulation of structured illumination patterns to encode high-frequency information of the raw images into the low-frequency region, allowing it to pass through the optical transfer function (OTF), and then obtains super-resolution images through demodulation and reconstruction. The reconstructed image is affected by several important parameters of the illumination light field, so it is necessary to estimate the unknown parameters accurately, especially the initial phase, to minimize artifacts in the reconstruction. In this work, we carried out SIM experiments and performed image reconstruction based on different phase estimation algorithms. First, we reviewed the development history of SIM and systematically introduced the principle of SIM super-resolution imaging and the phase estimation algorithms. Then, we discussed the technical difficulties of the hardware setup and built a dual-beam interference SIM system based on a ferroelectric liquid crystal spatial light modulator (FLC-SLM). Finally, we used different phase estimation algorithms to extract the initial phases of the collected images and obtained comparative results.
With the development of various fluorescence technologies and optical control, fluorescence super-resolution microscopy has broken the limit of optical diffraction. Among these techniques, structured illumination microscopy (SIM), which combines structured light illumination and wide-field fluorescence imaging, uses structured illumination to mix frequencies in Fourier space and bring high-frequency information into the passband of the optical transfer function (OTF), thereby achieving super-resolution imaging. With the advantages of weak phototoxicity and photobleaching and fast imaging speed, SIM is currently one of the most mainstream techniques for super-resolution microscopy imaging of living cells. In this work, we completed the theoretical simulation and experimental operation of SIM. First, we review the development of SIM and systematically introduce its super-resolution imaging principle. Then, we discuss the technical difficulties of the hardware and build a dual-beam interferometric SIM system based on a ferroelectric liquid crystal spatial light modulator, which takes only 270 ms to collect nine raw images and modulates the polarization of the illumination light to improve interference fringe contrast and energy utilization. Finally, using the open-source plugin HiFi-SIM for image reconstruction, we obtain close-to-ideal results.
Fringe projection profilometry (FPP) has been widely used in high-speed, dynamic, real-time three-dimensional (3D) shape measurement. Recovering high-accuracy, high-precision 3D shape information from a single fringe pattern is our long-term goal in FPP. Traditional single-shot fringe projection methods struggle to achieve high-precision 3D measurement of isolated objects with complex surfaces due to the influence of surface reflectivity and spectral aliasing. To break through the physical limits of traditional methods, we apply deep convolutional neural networks to single-shot fringe projection profilometry. By combining physical models with data-driven learning, we demonstrate that a model generated by training an improved U-Net can directly perform high-precision, unambiguous phase retrieval on a single-shot spatial-frequency-multiplexed composite fringe image while avoiding spectrum aliasing. Experiments show that our method can retrieve high-quality absolute 3D surfaces of objects by projecting only a single composite fringe image.
Three-dimensional (3D) imaging technology has been widely applied in various fields, such as intelligent manufacturing, online inspection, reverse engineering, cultural relic protection, etc. In this work, we present a high-accuracy real-time omnidirectional 3D scanning and inspection system based on fringe projection profilometry. First, a multi-camera system based on geometric constraints is constructed to perform stereo phase unwrapping without additional auxiliary projection images, ensuring high-accuracy 3D data acquisition in real time. Then, we propose a rapid 3D point cloud registration approach combining simultaneous localization and mapping (SLAM) with iterative closest point (ICP) techniques to align point cloud slices with an accuracy of up to 100 microns. Finally, a cycle-positioning-based registration scheme is developed to allow real-time 360° 3D surface defect inspection. The experimental results show that our system is capable of real-time omnidirectional 3D modelling and real-time 360° defect detection.
Using a single fringe image to achieve dynamic absolute 3D reconstruction has become a tremendous challenge and an enduring pursuit for researchers. In fringe projection profilometry (FPP), although many methods can achieve high-precision 3D reconstruction with simple system architectures via appropriate encoding schemes, they usually cannot retrieve the absolute 3D information of objects with complex surfaces from only a single fringe pattern. In this work, we develop a single-frame composite fringe encoding approach and use a deep convolutional neural network to retrieve the absolute phase of the object from this composite pattern end-to-end. The proposed method can directly obtain spectrum-aliasing-free phase information and robust phase unwrapping from a single-frame composite input through extensive data learning. Experiments demonstrate that the proposed deep-learning-based approach can achieve absolute phase retrieval from a single image.
Fringe projection profilometry (FPP) has been widely applied in fields such as intelligent manufacturing and medical plastic surgery. Recovering the three-dimensional (3D) surface of an object from a single frame has always been a pursued goal in FPP. The color fringe projection method is one of the most promising technologies for single-shot 3D imaging because of its multi-channel multiplexing. Inspired by the recent success of deep learning in phase analysis, we propose a novel single-shot 3D shape measurement approach named color deep learning profilometry (CDLP). Through "learning" on extensive datasets, the properly trained neural network can gradually "predict" the crosstalk-free, high-quality absolute phase corresponding to the depth information of the object directly from a color fringe image. Experimental results demonstrate that our method achieves accurate phase acquisition and robust phase unwrapping without any complex pre-/post-processing.
Eliminating phase ambiguity with as few fringe patterns as possible is a huge challenge in fringe projection profilometry (FPP). Stereo phase unwrapping (SPU) technologies based on geometric constraints can achieve phase unwrapping without projecting any additional patterns, which maximizes the efficiency of absolute phase retrieval. However, when high-frequency fringes are used, the phase ambiguities increase, which makes SPU unreliable. The adaptive depth constraint (ADC) method can increase the robustness of SPU, but it struggles in scenarios without a priori depth guidance. In this work, we propose a stereo phase unwrapping method based on feedback projection to robustly unwrap the wrapped phase of dense fringe images. To address the problem that the ADC depends too heavily on the last measurement result, a simple and effective depth anomaly detection strategy is proposed. After the reconstruction error is determined, the proposed fully automatic projection feedback mechanism quickly obtains the absolute depth of the object to correct the dynamic depth range of the ADC, thus guiding the acquisition of high-quality 3D information. Experiments prove that this approach can achieve high-speed, real-time, high-resolution 3D measurement at 30 Hz using only two perspectives.
Ensuring high quality standards at a competitive cost through rapid and accurate industrial inspection is a great challenge in the field of intelligent manufacturing. Three-dimensional (3D) optical quality inspection technologies are gradually being widely applied to surface defect detection of complex workpieces because of their non-contact, high-accuracy, digital, and automated nature. However, the contradiction between cost and efficiency, the dependence on additional positioning hardware, and compromised detection strategies remain urgent obstacles to overcome. In this work, we propose a fast 3D surface defect inspection approach based on fringe projection profilometry (FPP) for complex objects without any auxiliary equipment for position and orientation control. First, multi-view 3D measurement based on geometric constraints is employed to acquire high-accuracy depth information from different perspectives. Then, a cycle-positioning-based registration scheme with a pose-information-matched 3D standard digital model is proposed to realize rapid alignment of the measured point cloud and the standard model. Finally, a minimum 3D distance search method, driven by a dual-thread mechanism for simultaneous scanning and detection, quantifies and locates 3D surface defects in real time. To validate the proposed inspection approach, software combining 3D imaging, point cloud registration, and surface defect calculation was developed to perform quality inspection of complicated objects. The experimental results show that our method can accurately detect 3D surface defects of workpieces in real time with more economical hardware and more convenient means, which is of great significance to intelligent manufacturing.
Fringe projection profilometry (FPP) has been widely applied in three-dimensional (3D) measurement owing to its high measurement accuracy and simple structure. In FPP, how to effectively recover the absolute phase, especially from a single image, has always been a huge challenge and an enduring pursuit. Frequency-multiplexing methods can maximize the efficiency of phase unwrapping by mixing, in the spectrum, the multi-frequency information used to eliminate phase ambiguity. However, spectrum aliasing and the resulting phase unwrapping errors remain pressing difficulties. Inspired by the successful application of deep learning in FPP, we propose a deep-learning-based phase unwrapping approach for single-shot frequency-multiplexed fringe patterns. Through extensive data learning, properly trained neural networks can directly obtain spectrum-aliasing-free phase information and robust phase unwrapping from a single-frame composite input. Experimental results demonstrate that, compared with conventional frequency-multiplexing methods, our deep-learning-based approach achieves more accurate and stable absolute phase retrieval.
With the development of digital projectors, fringe projection profilometry has been widely used in fast 3D measurement. However, the field of view of traditional 3D measurement systems is commonly on the decimeter scale, which limits the 3D reconstruction accuracy to tens of microns. To improve the accuracy further, we must reduce the field of view while increasing the fringe density in space. For this purpose, we developed two kinds of systems, based on a stereo microscope and on telecentric lenses, respectively. We also studied the corresponding calibration frameworks and developed fast 3D measurement methods with both Fourier transform and phase-shifting algorithms for real-time 3D reconstruction of micro-scale objects.
KEYWORDS: 3D modeling, Cameras, 3D acquisition, 3D metrology, Clouds, Image registration, 3D image processing, Imaging systems, Data modeling, Projection systems
Three-dimensional (3D) registration or matching is a crucial step in 3D model reconstruction. In this work, we develop a real-time 3D point cloud registration technology. First, to achieve real-time 3D data acquisition, the stereo phase unwrapping method, assisted by a depth constraint strategy, is utilized to eliminate the ambiguity of the wrapped phase without projecting any additional patterns or embedding any auxiliary signals. Then we perform SLAM-based coarse registration and ICP-based fine registration to match the point cloud data after rapid identification of two-dimensional (2D) feature points. To improve the efficiency of 3D registration, the relative motion of the measured object at each coarse registration is quantified, so that only one fine registration is performed after several coarse registrations. Experiments show that complex models can be registered in real time to reconstruct their whole 3D models with our method.
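The motion-gated scheduling described above, in which one expensive ICP refinement follows several cheap coarse registrations, can be sketched as a simple budget scheme. All names and numbers here are hypothetical, not the authors' code:

```python
def registration_schedule(motions, threshold=5.0):
    """Given the relative motion quantified at each coarse registration,
    return the frame indices at which to trigger a fine ICP registration.
    `threshold` is a hypothetical accumulated-motion budget."""
    fine_frames = []
    accumulated = 0.0
    for frame, motion in enumerate(motions):
        accumulated += motion
        if accumulated >= threshold:
            fine_frames.append(frame)  # run ICP here, then reset the budget
            accumulated = 0.0
    return fine_frames

# Eight coarse-registration steps; fine ICP fires only twice
print(registration_schedule([1.2, 0.8, 2.5, 1.0, 0.3, 4.1, 0.2, 0.9]))  # -> [3, 7]
```

Gating the fine stage on accumulated motion keeps the per-frame cost near that of coarse registration alone while bounding the drift that ICP must correct.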
In fringe projection profilometry, using denser fringes improves the measurement accuracy. In real-time measurement, the number of fringe patterns is limited to reduce motion-induced errors, which, however, makes absolute phase recovery from dense fringes more difficult. In this paper, we propose a stereo phase matching method that combines the high accuracy of denser fringes with the high efficiency of using only two fringe frequencies. The phase map is divided into several sub-areas, and in each sub-area the phase is unwrapped independently. The correct matched pixel is then easily selected from the candidates distributed across different sub-areas with the help of geometric constraints.
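The role of the geometric constraint can be pictured in one dimension: a wrapped phase admits one absolute-phase candidate per possible fringe order, and the constraint keeps the candidate whose triangulated depth falls inside the valid range. The sketch below uses a hypothetical linear phase-to-depth model and made-up numbers; note that the depth window must be narrower than the depth change of one fringe order for the choice to be unique, which is why denser fringes make unwrapping harder:

```python
import numpy as np

N_FRINGES = 16                   # fringe frequency: periods across the field
Z_MIN, Z_MAX = 100.0, 101.5      # depth window from the geometric constraint

def phase_to_depth(abs_phase):
    """Hypothetical linear triangulation model mapping absolute phase to depth."""
    return 95.0 + abs_phase / (2 * np.pi * N_FRINGES) * 30.0

def unwrap_with_constraint(wrapped):
    """Enumerate fringe-order candidates and keep the one whose depth
    satisfies the constraint; returns (absolute phase, fringe order)."""
    for k in range(N_FRINGES):
        abs_phase = wrapped + 2 * np.pi * k
        z = phase_to_depth(abs_phase)
        if Z_MIN <= z <= Z_MAX:
            return abs_phase, k
    return None

abs_phase, order = unwrap_with_constraint(1.0)
```

Sub-area unwrapping and a second fringe frequency both serve to shrink the candidate set that this kind of constraint must disambiguate.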