This PDF file contains the front matter associated with SPIE Proceedings Volume 12563, including the Title Page, Copyright information, Table of Contents, and Conference Committee listings.
To address the low contrast and blurred details of infrared images, a two-stream deep fully convolutional neural network is proposed for low-quality infrared image enhancement. An infrared detail enhancement sub-network and a global content-invariant sub-network are designed to achieve adaptive enhancement of infrared features. First, the detail enhancement network, composed of a mixed attention block with multiple convolutions, residual learning, and an up-sampling unit, extracts deep features from the inputs, learns meaningful thermal radiation target information, suppresses irrelevant background, and thereby separates targets from the background. Second, the content-invariant network, consisting mainly of dilated convolutions and multi-scale convolutions, captures rich contextual information to preserve the overall content and spatial structure and to avoid over-enhancement of local regions. Finally, a fine-tuning unit fuses the features extracted by the two streams, completing the complementation between the different mappings and generating high-quality infrared images. Experiments on public datasets and self-collected infrared datasets demonstrate that the proposed method outperforms other image enhancement methods not only in PSNR and SSIM but also in visual quality, with fewer artifacts and less noise.
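The overall architecture lends itself to a compact sketch. The PyTorch code below is a minimal illustration of such a two-stream design, with a detail branch built from attention-augmented residual blocks, a content branch built from dilated convolutions, and a small fine-tuning head that fuses the two; the block layout, channel widths, and layer counts are illustrative assumptions rather than the authors' exact network.

```python
# Minimal sketch of a two-stream enhancement network of the kind described above;
# layer sizes, block names, and the fusion head are assumptions, not the paper's design.
import torch
import torch.nn as nn

class MixedAttentionBlock(nn.Module):
    """Channel-attention residual block (hypothetical stand-in for the paper's
    mixed attention block with multiple convolutions)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.att = nn.Sequential(  # squeeze-and-excitation style channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        f = self.body(x)
        return x + f * self.att(f)          # residual learning + attention reweighting

class TwoStreamEnhancer(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.head = nn.Conv2d(1, ch, 3, padding=1)
        # Detail stream: attention blocks focusing on thermal targets.
        self.detail = nn.Sequential(*[MixedAttentionBlock(ch) for _ in range(3)])
        # Content stream: dilated convolutions for large-context, structure-preserving features.
        self.content = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        # Fine-tuning unit: fuse both streams and map back to a single channel.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (N, 1, H, W) infrared image in [0, 1]
        f = self.head(x)
        return self.fuse(torch.cat([self.detail(f), self.content(f)], dim=1))

if __name__ == "__main__":
    y = TwoStreamEnhancer()(torch.rand(1, 1, 128, 128))
    print(y.shape)                           # torch.Size([1, 1, 128, 128])
```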
In this work, we propose to use an artificial neural network (ANN) to compute the pulse performance of a linear-cavity fiber laser. First, a four-hidden-layer ANN (ANN1) is trained to judge whether a small noise pulse in the fiber cavity can evolve into a stable mode-locked pulse under different cavity parameters. ANN1 achieves an accuracy of 98.3% on the test data set, and we use it to quickly compute the pulse convergence region in the three-dimensional parameter space. Then, a three-hidden-layer ANN (ANN2) is trained to compute the output pulse shape of the fiber laser, and its accuracy is verified. Based on ANN2 and a genetic algorithm, we then design a method to inversely deduce the laser parameters from a known output pulse width. The algorithm has low time complexity, and its accuracy improves as the genetic process is repeated. We believe that the neural network models presented in this work provide an efficient and general means of studying fiber-optic dynamics and have great application prospects for future related work.
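As an illustration of the inverse-design step, the sketch below wraps a placeholder surrogate network (standing in for ANN2) in a simple genetic algorithm that searches cavity parameters whose predicted pulse width matches a target value; the surrogate, parameter bounds, and GA settings are assumptions for illustration only.

```python
# Hedged sketch: genetic search over cavity parameters against a surrogate pulse model.
import numpy as np

def surrogate_pulse_width(params):
    """Placeholder for ANN2: maps cavity parameters -> predicted pulse width (arb. units)."""
    w = np.array([0.8, -0.3, 0.5])                        # hypothetical weights
    return 1.0 + np.abs(params @ w)

def genetic_inverse(target_width, bounds, pop=64, gens=100, mut=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    P = rng.uniform(lo, hi, size=(pop, len(lo)))          # initial population
    for _ in range(gens):
        fit = -np.abs([surrogate_pulse_width(p) - target_width for p in P])
        elite = P[np.argsort(fit)[-pop // 2:]]            # keep the best half
        parents = elite[rng.integers(len(elite), size=(pop, 2))]
        P = parents.mean(axis=1)                          # crossover: parent average
        P += rng.normal(0, mut, P.shape) * (hi - lo)      # Gaussian mutation
        P = np.clip(P, lo, hi)
    return min(P, key=lambda p: abs(surrogate_pulse_width(p) - target_width))

bounds = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])   # hypothetical parameter ranges
print(genetic_inverse(1.5, bounds))
```

Because each fitness evaluation is a single surrogate forward pass rather than a cavity simulation, repeating the genetic process is cheap, which is the source of the low time complexity noted in the abstract.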
As a kind of microalga, Spirulina plays an important role in fish farming, the food processing industry, medical treatment, and bioenergy development owing to its balanced nutritional composition and high hydrogenase activity. However, the purity of Spirulina, which can be significantly degraded by viral infection and contamination by miscellaneous algae, has a great impact on product quality, so periodic Spirulina detection is necessary for quality control of Spirulina culture. Currently, there are two main detection methods: optical microscopy and fluorescence detection. The former offers higher accuracy but lower speed, while the latter is faster but destroys the sample. Deep learning can accelerate data processing while achieving high accuracy through model training and validation. In this work, we applied deep learning to Spirulina detection to obtain a higher detection accuracy. The process consisted of four main steps: Spirulina culture, image acquisition, image preprocessing, and YOLO-v3 model training. Hyperparameter tuning was carried out to determine appropriate training parameters, yielding a trained model with an mAP of 0.839 at a detection speed of 20.53 fps. The approach shows great application potential for quantity and size detection of cultured Spirulina.
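For context, counting and sizing from a trained YOLO-v3-style detector can be run with OpenCV's Darknet loader as sketched below; the configuration and weight file names, the thresholds, and the single-class setup are hypothetical and do not reflect the authors' exact training configuration.

```python
# Sketch of YOLO-v3 inference used for counting and sizing detected filaments.
# File names and thresholds are assumptions for illustration.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3-spirulina.cfg", "yolov3-spirulina.weights")
layer_names = net.getUnconnectedOutLayersNames()

img = cv2.imread("microscope_frame.jpg")
h, w = img.shape[:2]
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

boxes, scores = [], []
for out in net.forward(layer_names):
    for det in out:                          # det = [cx, cy, bw, bh, objectness, class scores...]
        conf = float(det[4] * det[5:].max())
        if conf > 0.5:
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(conf)

keep = cv2.dnn.NMSBoxes(boxes, scores, 0.5, 0.4)          # non-maximum suppression
print(f"count: {len(keep)}")
for i in np.array(keep).flatten():                        # box size as a rough size proxy
    x, y, bw, bh = boxes[i]
    print(f"filament {i}: {bw}x{bh} px")
```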
Segmenting small infrared targets, which occupy few pixels and have weak features, has long been a difficult problem in small-target image processing. Such targets appear not only in general images but also widely in footage from UAV cameras, communication base-station cameras, rescue cameras, and vehicle-mounted cameras, so small-target segmentation algorithms are important for analyzing and utilizing these images and have important applications in security, transportation, and rescue. Traditional small-target segmentation algorithms can segment objects with simple contour edges and large differences in signal strength, but they often suffer from high false-detection and missed-detection rates when facing multiple weak targets and perform poorly in complex scenes. In this paper, we introduce an infrared small-target segmentation scheme designed for multiple target types and numbers, and we construct an infrared UAV and pedestrian dataset for validation.
Feature extraction and matching of remote sensing images is becoming increasingly important, with a wide range of applications. It matches and superimposes images of the same scene acquired at different times, by different sensors, and from different angles, and maps them onto the target image. CNN-based algorithms have shown superior expressiveness compared with traditional methods in almost all image-related fields. This paper optimises a network based on SuperPoint by replacing standard convolutions with depthwise separable convolutions, which have fewer parameters, and by replacing the convolutional block with a spindle-shaped inverted residual block composed of dimension expansion, depthwise separable convolution, and dimension reduction. The network depth is fine-tuned to preserve accuracy. The model is trained on the RSSCN7 remote sensing dataset. In a cross-sectional comparison with traditional algorithms, each combined with SuperGlue, the optimised algorithm shows superior overall performance.
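A block of the kind described, expansion followed by a depthwise separable convolution and a projection back down, can be sketched in a few lines of PyTorch; the expansion factor and channel counts below are illustrative assumptions, not the paper's exact settings.

```python
# MobileNetV2-style inverted residual block as a stand-in for the modified conv block.
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, in_ch, out_ch, expand=4, stride=1):
        super().__init__()
        mid = in_ch * expand
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False),                      # dimension expansion
            nn.BatchNorm2d(mid), nn.ReLU6(inplace=True),
            nn.Conv2d(mid, mid, 3, stride, 1, groups=mid, bias=False), # depthwise convolution
            nn.BatchNorm2d(mid), nn.ReLU6(inplace=True),
            nn.Conv2d(mid, out_ch, 1, bias=False),                     # dimension reduction
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_skip else y

# e.g. a drop-in replacement for one conv block of a SuperPoint-like encoder
x = torch.rand(1, 64, 60, 80)
print(InvertedResidual(64, 64)(x).shape)     # torch.Size([1, 64, 60, 80])
```

The parameter saving comes from the grouped 3x3 convolution: it applies one spatial filter per channel instead of a dense channel-mixing filter bank, while the 1x1 convolutions handle channel mixing cheaply.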
In recent years, lung cancer has become one of the most lethal diseases for human beings. Clinical data show that the probability of a lung nodule developing into lung cancer is about 30%. Owing to the lack of obvious symptoms, around 70% of lung cancer patients in China are already at an advanced stage when first diagnosed. Therefore, early identification of lung nodules is of great significance for early diagnosis and therapy. Currently, artificial intelligence is widely used to build predictive models of lung nodules with learning algorithms adapted to image characteristics, improving the accuracy and sensitivity of early lung cancer diagnosis. In this work, LUNA16 (Lung Nodule Analysis 2016, containing a total of 888 low-dose thin-slice plain-scan chest Computed Tomography (CT) examinations) was selected as the data set, providing a total of 1018 CT slices with the most representative lung nodule shapes for this analysis. The project was carried out on the Baidu AI Studio platform, applying both U-Net and PSPNet to train models for rapid detection of lung nodules. Training produced a model that rapidly and accurately identifies lung nodules larger than 3 mm in diameter. The results showed that the accuracy of U-Net was higher than that of PSPNet, indicating high potential for further clinical diagnosis of lung cancer.
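To make the segmentation setup concrete, the following is a compact U-Net-style sketch for single-channel CT slices; its depth and channel widths are illustrative assumptions and do not reproduce the training configuration described above.

```python
# Compact U-Net-style segmenter for single-channel CT slices (illustrative only).
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = double_conv(1, 32), double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = double_conv(64, 32)                      # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, 1, 1)                     # nodule / background logits

    def forward(self, x):
        e1 = self.enc1(x)                                   # skip-connection source
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))   # decode with skip features
        return self.head(d)

logits = TinyUNet()(torch.rand(1, 1, 256, 256))
print(logits.shape)                                         # torch.Size([1, 1, 256, 256])
```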
Chromatic confocal technology (CCT) uses the dispersion principle to establish an accurate encoding between spatial position and the axial focal point of each wavelength, enabling non-contact measurement. The accuracy of the measurement results depends on the accuracy of peak-wavelength extraction. Exploiting the flexibility and adaptability of machine learning, we model the nonlinear relationship between spectral wavelength and light intensity, establish the response between input wavelength and output normalized intensity, and refit the spectral curve. In this paper, three regression networks, the Extreme Learning Machine (ELM), the Back-Propagation Neural Network (BPNN), and the Genetic-Algorithm-optimized BPNN (GA-BPNN), are applied to fit the spectral response of the system and accurately locate the peak wavelength. They are compared with the traditional peak-extraction methods of Gaussian fitting, polynomial fitting, and the center-of-mass method, verifying that the machine learning approach is significantly more accurate. The ELM network performs best of the three, with a peak-extraction error of only 0.04 μm and a root-mean-square error (RMSE) of only 6.8×10⁻⁴. Calibration, resolution, and stability experiments show that the ELM algorithm also has the shortest computation time, and the system measurement resolution determined with the ELM algorithm is about 2 μm. These results contribute to improving the measurement accuracy and efficiency of the system.
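To illustrate the peak-extraction idea, the sketch below fits an ELM regressor to a synthetic wavelength-intensity curve and takes the maximum of the refitted curve as the peak wavelength; the synthetic spectrum, hidden-layer size, and evaluation grid are assumptions for illustration.

```python
# Minimal ELM regression sketch: fit wavelength -> normalized intensity, then locate the peak.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noisy spectral response (stand-in for a measured confocal spectrum).
wl = np.linspace(500.0, 700.0, 200)                     # wavelength grid, nm
intensity = np.exp(-((wl - 612.3) / 18.0) ** 2) + rng.normal(0, 0.01, wl.size)

# ELM: random input weights and biases, sigmoid hidden layer,
# output weights solved in closed form by least squares.
n_hidden = 60
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)

def hidden(x):                                          # x: (N, 1) normalized wavelengths
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

x = ((wl - wl.mean()) / wl.std()).reshape(-1, 1)
beta = np.linalg.pinv(hidden(x)) @ intensity            # output weights (least squares)

# Refit the curve on a dense grid and take its maximum as the peak wavelength.
wl_fine = np.linspace(wl[0], wl[-1], 20000)
x_fine = ((wl_fine - wl.mean()) / wl.std()).reshape(-1, 1)
peak = wl_fine[np.argmax(hidden(x_fine) @ beta)]
print(f"estimated peak wavelength: {peak:.3f} nm")
```

Because the output weights are obtained by a single least-squares solve rather than iterative back-propagation, this kind of ELM fit is very fast, consistent with the short computation time reported for ELM above.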