John Greivenkamp,1 Jun Tanida,2 Yadong Jiang,3 HaiMei Gong,4 Jin Lu,5 Dong Liu6
1Wyant College of Optical Sciences (United States) 2Osaka Univ. (Japan) 3Univ. of Electronic Science and Technology of China (China) 4The Shanghai Institute of Technical Physics of the Chinese Academy of Sciences (China) 5Tianjin Jinhang Institute of Technical Physics (China) 6Zhejiang Univ. (China)
This PDF file contains the front matter associated with SPIE Proceedings Volume 11342, including the Title Page, Copyright Information, Table of Contents, Author and Conference Committee lists.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users─please
sign in
to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on
SPIE.org.
In clinical testing, with the increasing demand for quantitative urine protein analysis in departments such as urology and the ICU (intensive care unit), fast, accurate and simple methods for detecting early renal impairment have attracted wide attention from researchers and industry. This study proposes an early renal function damage detection device that combines ACR (albumin-to-creatinine ratio) analysis and NGAL (neutrophil gelatinase-associated lipocalin) analysis, which optimizes existing detection methods [1] and improves existing detection techniques. Based on the Lambert-Beer law, this study used immuno-transmission colorimetry and immunoturbidimetry to analyze the samples and build the experimental tooling.
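The Lambert-Beer relationship underlying both colorimetric techniques can be sketched as follows; the intensity and extinction values are illustrative placeholders, not values from this study:

```python
import math

def absorbance(transmitted, incident):
    """Absorbance from transmitted/incident intensity: A = -log10(I/I0)."""
    return -math.log10(transmitted / incident)

def concentration(A, epsilon, path_length_cm):
    """Lambert-Beer law A = epsilon * l * c, solved for concentration c."""
    return A / (epsilon * path_length_cm)

A = absorbance(transmitted=25.0, incident=100.0)      # A = -log10(0.25)
c = concentration(A, epsilon=1.2e4, path_length_cm=1.0)
```

In practice the device would calibrate epsilon against standards of known protein concentration before applying this relation to patient samples.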
Mode decomposition (MD) is essential for revealing the intrinsic mode properties of fiber beams. However, traditional numerical MD approaches are relatively time-consuming and sensitive to initial values. To solve these problems, a deep learning technique is introduced to perform non-iterative MD. In this paper, we focus on the real-time MD capability of a pre-trained convolutional neural network. Numerical simulation indicates that the average correlation between the reconstructed and measured patterns is 0.9987 and the decomposition rate can reach about 125 Hz. In the experimental case, the average correlation is 0.9719 and the decomposition rate is 29.9 Hz, limited by the maximum frame rate of the CCD camera. The results of both simulation and experiment show the excellent real-time capability of deep learning-based MD methods.
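The correlation figure quoted above is, in the usual convention for comparing beam patterns, a Pearson correlation between intensity images; a minimal sketch of that metric (an assumption about the exact definition used) is:

```python
import numpy as np

def pattern_correlation(a, b):
    """Pearson correlation between two intensity patterns, flattened
    and mean-subtracted; 1.0 means a perfect reconstruction."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```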
Wavefront aberration caused by atmospheric turbulence needs to be measured in free-space optical communication. Existing sensors for wavefront aberration measurement fall into two classes: wavefront sensors and image-based sensors. Wavefront sensors, such as the Hartmann sensor and shearing interferometry, measure the wavefront slope to calculate the wavefront aberration. However, wavefront sensors consume most of the laser energy, which makes them hard to use in free-space optical communication in the daytime. Image-based sensors usually require iteration, which means poor real-time performance and locally optimal solutions. No existing method can measure wavefront aberrations in real time in daytime free-space optical communication. In this article, a new method of measuring wavefront aberration with a CNN is proposed, which can be used in daytime free-space optical communication with good real-time performance. We modified VGG so that it can be used to fit the Zernike coefficients. The input to the network is the PSF of the focal plane and the defocus plane, and the output is an initial estimate of the Zernike coefficients. In the experiment, 22,000 image pairs produced by a liquid crystal device were collected; the wavefront was built from 64 Zernike coefficients at an atmospheric coherence length (r0) of 5 cm. 20,000 pairs were used as the training set and the remainder as the test set. The root-mean-square (RMS) wavefront error of VGG is on average within 0.0487 waves and the inference time is 11-12 ms. Taking an RMS wavefront error of less than 0.1 waves as the criterion of correctness, the correct rate is 98.75%, while the remaining RMS wavefront errors are close to 0.1 waves.
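Under the common assumption that the 64 Zernike coefficients are Noll-normalized (orthonormal over the unit pupil), the RMS wavefront error between true and predicted coefficient vectors reduces to a root-sum-square of coefficient residuals; a sketch of that evaluation metric:

```python
import numpy as np

def rms_wavefront_error(coeffs_true, coeffs_pred):
    """RMS residual wavefront error in waves, assuming orthonormal
    (Noll-normalized) Zernike polynomials, so the wavefront RMS equals
    the root-sum-square of the coefficient differences."""
    residual = np.asarray(coeffs_true) - np.asarray(coeffs_pred)
    return float(np.sqrt(np.sum(residual ** 2)))
```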
Mid-spatial-frequency (MSF) error on optical surfaces can do great harm to high-performance laser systems. A non-interferometric way of measuring it is phase retrieval, which has proved its effectiveness in previous studies. However, the performance of phase retrieval is limited by its long iterative process and its heavy reliance on a reliable initial solution. Therefore, in this paper, we put forward a method for fast measurement of MSF error by introducing an advanced deep learning technique into traditional computational imaging methods. Results show that the proposed method simultaneously gains an improvement in convergence speed and a reduction in residual error. In numerical experiments, it takes far fewer iterations to converge to the same error level and has a much smaller average residual error than the conventional algorithm.
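The iterative baseline being accelerated is conventional phase retrieval; a generic Gerchberg-Saxton sketch (not the authors' exact algorithm) shows why a good initial phase matters, since the loop below starts from a random guess:

```python
import numpy as np

def gerchberg_saxton(pupil_amp, image_amp, n_iter=50, seed=0):
    """Classic Gerchberg-Saxton phase retrieval: recover the pupil phase
    from known pupil-plane and focal-plane amplitudes by alternating
    Fourier transforms and amplitude constraints."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)  # random initial guess
    for _ in range(n_iter):
        field = pupil_amp * np.exp(1j * phase)
        focal = np.fft.fft2(field)
        focal = image_amp * np.exp(1j * np.angle(focal))  # enforce measured amplitude
        back = np.fft.ifft2(focal)
        phase = np.angle(back)                            # keep phase, reapply pupil amplitude
    return phase
```

A learned initial solution, as proposed here, replaces the random guess and shortens this loop.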
The Scanning Laser Ophthalmoscope (SLO) is an essential medical tool for the diagnosis of retinal disease. It uses a low-power laser to scan the retina at high speed and transmits the fundus images to a video monitor for auxiliary medical diagnosis. However, as with all optical imaging technologies, the imaging is often not ideal due to limitations of the hardware and external conditions. In most clinical cases, only low-resolution retinal images are available to assist medical diagnosis. For this reason, we propose a new deep super-resolution method for scanning laser ophthalmoscope retinal images. The retinal image, enhanced by a local Laplacian operator, is fed into an efficient fully convolutional neural network. The network is trained with the Adam algorithm instead of traditional SGD (stochastic gradient descent), which converges faster and yields better reconstructed images. In this work, we subjectively evaluate our algorithm, apply it to real retinal images, and compare it with several traditional super-resolution reconstruction methods. The experimental results show that this method achieves good results in improving the overall quality of scanning laser ophthalmoscope images.
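The Adam-versus-SGD choice mentioned above comes down to the update rule; a minimal NumPy sketch of one Adam step (standard bias-corrected moment estimates, hyperparameters at their usual defaults) is:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v), bias-corrected, then a normalized step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias correction for first moment
    v_hat = v / (1 - b2 ** t)          # bias correction for second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Unlike plain SGD, the per-parameter normalization by `sqrt(v_hat)` makes the step size largely insensitive to the gradient scale, which is one reason it tends to converge faster on deep networks.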
In the detection of small, weak defect targets in the ink area of the planar glass element of a mobile phone, a line-scan camera with dark-field illumination and line-by-line scanning produces images (30720 × 16384 pixels) far larger than the small defect targets (about 3 pixels). At the same time, because the defects are located in the ink area, the contrast between a defect target and the ink-area background can be very low, so weak defect targets in the ink area cannot be detected quickly and effectively by common target detection methods. To solve this problem, a detection method for small, weak defects in the ink area of the planar glass element, based on self-correlation template matching and one-dimensional maximum entropy, is proposed in this paper. Firstly, the large-scale image is clipped, the character information in the ink area is recognized and located using the self-correlation template matching algorithm, and the character and logo information in the ink area is clipped out according to the positioning results. Secondly, the processed image is clipped again and binarized by the OTSU method, and BLOB analysis is used to select the largest white region in the second clipped image as the ink area. Thirdly, the Sobel operator is used to detect the edge of the ink area, and the transitional region 100 pixels wide along the edge is clipped away, so that the clipped image of the ink area contains only valid small, weak defect targets. Finally, the one-dimensional maximum entropy algorithm is used to separate the defect targets from the real ink area, and the weak, small defect targets are recognized and detected by BLOB analysis. The experimental results show that the method solves the problem of detecting small, weak defects in the ink area of the planar glass element with fast recognition speed and high detection accuracy.
It can be applied in inspecting the quality and cleanliness of planar glass elements, and is of great significance for improving the quality and efficiency of mobile phone production and assembly.
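The one-dimensional maximum entropy step can be sketched as the classic Kapur thresholding rule: choose the gray level that maximizes the summed entropies of the two histogram partitions.

```python
import numpy as np

def max_entropy_threshold(hist):
    """One-dimensional maximum entropy thresholding: pick the gray level
    that maximizes the sum of the entropies of the background and
    foreground histogram partitions."""
    p = hist / hist.sum()
    c = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p) - 1):
        w0, w1 = c[t], 1.0 - c[t]
        if w0 <= 0 or w1 <= 0:
            continue
        p0 = p[: t + 1] / w0              # normalized background distribution
        p1 = p[t + 1:] / w1               # normalized foreground distribution
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

For low-contrast defects, this criterion tends to place the threshold more robustly than variance-based OTSU when the defect class occupies only a tiny fraction of the histogram.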
Computed tomography (CT) has been extensively used in nondestructive testing, medical diagnosis, etc. In modern medicine, metal implants are widely used, and the severe artifacts they cause in CT reconstructions cannot be ignored. The sinogram contains the most faithful projection information of the patient, and processing directly in the sinogram domain preserves the effective information to the maximum extent. In this paper, we propose a novel method based on a fully convolutional network (FCN) for metal artifact reduction in the sinogram domain. The network uses complete sinogram data to learn a mapping function that corrects the metal-corrupted sinogram data, taking the metal-corrupted sinogram as input and the artifact-free sinogram as target. Compared with existing deep learning-based CT artifact reduction methods, our work uses only sinogram information to correct the metal artifacts. The proposed network can process images of different sizes. Our initial results on a simulated dataset demonstrate the potential effectiveness of this new approach to suppressing artifacts.
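For context, the conventional non-learned baseline that such a network replaces is linear interpolation (LI) across the metal trace in each projection view; a sketch of that classical correction (not the authors' network) is:

```python
import numpy as np

def li_correction(sinogram, metal_mask):
    """Classical linear-interpolation (LI) metal artifact reduction:
    for each projection view (row), replace detector bins covered by
    the metal trace with values interpolated from their neighbors."""
    out = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i].astype(bool)
        if bad.any() and not bad.all():
            out[i, bad] = np.interp(bins[bad], bins[~bad], out[i, ~bad])
    return out
```

LI discards all information under the metal trace, which is exactly the loss a learned sinogram-domain mapping aims to avoid.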
Thanks to the advantages of improved focusing precision and reduced energy loss in beam focusing, complex and off-axis aspheric mirrors are widely used in aviation, aerospace, national defense and other large optical systems. Ultra-precision grinding is an important technology for manufacturing large-aperture aspheric optics in large quantities. In order to fabricate large-aperture aspheric optics efficiently and precisely, several key technologies for parallel grinding are proposed in this article. First, a computer-aided programming system was developed that computes the coordinates of the aspheric surface and the diamond wheel during grinding and automatically generates CNC programs that can be executed directly by the grinder. Under the constraint of waviness control, the raster grinding trajectory was optimized to improve the material removal efficiency. To acquire the radius and form error of the diamond wheel, a wheel measurement method based on a corkscrew spin trajectory was proposed, which can capture the 3-D geometric morphology of the wheel. Through precision tool setting using a displacement sensor, the relative position between wheel and workpiece was established, avoiding error correction in the subsequent grinding process. Through on-machine measurement with a non-contact displacement sensor, the 3-D form error of the optics was acquired and combined with the theoretical aspheric coordinates for compensation grinding. Finally, grinding experiments were carried out. The material removal rates of rough, semi-fine and fine grinding were about 520 mm³/s, 26 mm³/s and 1.6 mm³/s, respectively. The P-V form error after fine grinding was about 3.21 μm. The goal of highly efficient, ultra-precision grinding of large-aperture, complex aspheric optics was achieved.
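The coordinate computation in such a CAM system starts from the standard even-aspheric sag equation; a sketch (the conic and aspheric coefficients below are illustrative, not from this work):

```python
import math

def aspheric_sag(r, c, k, a=()):
    """Sag z(r) of a standard even aspheric surface:
    z = c r^2 / (1 + sqrt(1 - (1 + k) c^2 r^2)) + sum_i a_i r^(2i+4),
    where c is the vertex curvature and k the conic constant."""
    z = c * r ** 2 / (1 + math.sqrt(1 - (1 + k) * c ** 2 * r ** 2))
    for i, ai in enumerate(a):
        z += ai * r ** (2 * i + 4)     # higher-order terms r^4, r^6, ...
    return z
```

Sampling z(r) along the raster path, offset by the wheel radius, gives the wheel-center coordinates the CNC program needs.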
A panoramic surveillance system is designed to achieve continuous monitoring of the surrounding environment. The image acquisition module consists of five fixed-focal-length cameras and one variable-focal-length camera, realizing 360-degree environmental surveillance. An adaptive threshold is used to dynamically update the background template in order to better accommodate changing weather. A pixel-level video moving-target detection algorithm is then applied to detect whether an intruding target exists and to determine its direction; it offers low computation and good detection accuracy. Once an intruding target is found, the SSD deep convolutional neural network is employed to recognize the specific target quickly. Visual object tracking is one of the most active topics in computer vision, and deep neural networks have recently been widely applied to it with great success. Here, we propose an end-to-end lightweight Siamese convolutional neural network to achieve fast and robust target tracking. The experimental results show that the panoramic surveillance system can effectively and robustly perform security tasks such as panoramic imaging, target recognition and fast target tracking. At the same time, the deep convolutional neural network can recognize and track the target accurately and quickly, meeting the real-time and accuracy requirements of practical tasks.
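The adaptive background update and pixel-level motion test can be sketched as a running-average model with a statistics-driven threshold; the update rate `alpha` and factor `k` are illustrative assumptions, not the authors' exact rule:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background template; alpha controls how quickly
    the template adapts to gradual changes such as weather and lighting."""
    return (1 - alpha) * bg + alpha * frame

def detect_motion(bg, frame, k=2.5):
    """Pixel-level motion mask with an adaptive threshold: flag pixels
    whose deviation from the background exceeds the frame's mean
    deviation plus k standard deviations."""
    diff = np.abs(frame.astype(float) - bg)
    thresh = diff.mean() + k * diff.std()
    return diff > thresh
```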
Data imbalance is a common problem in hyperspectral image classification and seriously affects final classification performance. To address this problem, this paper proposes a novel solution based on an oversampling method and a convolutional neural network, implemented in two steps. Firstly, SMOTE (Synthetic Minority Oversampling Technique) is used to augment the minority classes: new artificial samples are generated and added to the minority classes so that all classes in the training dataset reach a balanced distribution. Secondly, according to the data characteristics of hyperspectral images, a convolutional neural network is constructed for classification and trained on the balanced training set. We evaluated the proposed solution on the Indian Pines and Pavia University datasets. The experimental results show that the proposed solution effectively solves the problem of imbalanced hyperspectral data and improves classification performance.
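The SMOTE step can be sketched in a few lines: each synthetic sample is a random interpolation between a minority sample and one of its k nearest minority neighbors.

```python
import numpy as np

def smote(X, n_new, k=5, seed=0):
    """Minimal SMOTE: for each synthetic sample, pick a random minority
    sample, one of its k nearest minority-class neighbors, and a random
    point on the segment between them."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1 : k + 1]           # skip the sample itself
        j = rng.choice(nbrs)
        out.append(X[i] + rng.random() * (X[j] - X[i]))
    return np.array(out)
```

For hyperspectral data, each row of `X` would be a pixel's spectral vector from one minority class; the interpolation keeps synthetic spectra inside the convex hull of real ones.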
Small-scale waviness inevitably occurs when grinding an aspheric surface with the raster parallel grinding technique. Aiming at the problem of waviness amplitude and uniformity, this paper theoretically analyzes the relationship between grinding process parameters and aspheric waviness, and designs a single-factor experiment to verify the influence of the process parameters on surface waviness. The process parameters are then chosen to minimize the waviness amplitude. Considering the uniformity of the waviness, and based on the influence of the grinding force during machining, down-grinding and up-grinding raster parallel grinding methods are compared. Experiments verify that down-grinding raster parallel grinding yields the most uniform small-scale waviness of the aspheric surface, with a minimum amplitude of 0.5 μm to 1.5 μm.
Computer color matching is gradually replacing the manual method of deriving pigment formulas from experience. According to the calculation methods of the CIE color system, the color of an object can be measured by instrument. The basic colors used in computer color matching and their chromaticity values play a decisive role in matching efficiency and quality. However, the spectral characteristics and chromaticity values of the basic colors are easily affected by many factors, and their stability is limited. This study examines the spectral properties of a large number of pigment samples and calculates their color values, selects red, yellow and blue base pigments that meet camouflage requirements, and then studies the factors that may shift the chromaticity values of the camouflage colors. Within the required range, the camouflage base colors and their chromaticity values are determined, laying a foundation for constructing a color database for computer color matching and chromaticity.
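The CIE calculation behind these chromaticity values is a tristimulus integration over wavelength; a sketch, where the illuminant `S`, reflectance `R` and color-matching functions are placeholder samples the caller must supply:

```python
import numpy as np

def tristimulus(S, R, xbar, ybar, zbar, dlam):
    """CIE tristimulus values, e.g. X = k * sum S(l) R(l) xbar(l) dl,
    with k normalized so that a perfect white (R = 1) gives Y = 100."""
    k = 100.0 / np.sum(S * ybar * dlam)
    X = k * np.sum(S * R * xbar * dlam)
    Y = k * np.sum(S * R * ybar * dlam)
    Z = k * np.sum(S * R * zbar * dlam)
    return X, Y, Z
```

Chromaticity coordinates then follow as x = X/(X+Y+Z), y = Y/(X+Y+Z), which is the form a color database would store for each base pigment.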
To locate and segment varistor images more accurately, and thus to automatically build the image datasets needed for deep learning, this paper proposes a method for locating and separating the body and leads of a varistor based on the Hough transform and mathematical morphology. To obtain an image free of surface reflections, the method first acquires the varistor image with a coaxial light source. Secondly, the image is preprocessed by denoising, graying and binarization; then a circle-detecting Hough transform is used to locate the body of the resistor. To further separate the body and the leads, edge searching is performed on the located body, the interior of the body is background-filled, and finally a morphological erosion operation removes the edge marks of the body, yielding the locations of the leads. In the experiment, 91 varistor samples were located and segmented, and the segmentation results were evaluated with validity and correctness metrics. The experimental results show that the proposed method performs well and has a good target segmentation effect, providing the reliable varistor image datasets needed for deep learning.
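The morphological erosion used in the final step can be sketched directly in NumPy; a square structuring element is assumed here for illustration:

```python
import numpy as np

def binary_erode(img, size=3):
    """Binary erosion with a size x size square structuring element:
    a pixel stays set only if every pixel under the element is set,
    which strips thin edge marks from a binary mask."""
    pad = size // 2
    padded = np.pad(img.astype(bool), pad, constant_values=False)
    out = np.ones_like(img, dtype=bool)
    for dy in range(size):
        for dx in range(size):
            out &= padded[dy : dy + img.shape[0], dx : dx + img.shape[1]]
    return out
```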
Image super-resolution refers to reconstructing a high-resolution image by processing one or more complementary low-resolution images. It is widely used in medical imaging, video surveillance, remote sensing and other fields. Learning-based super-resolution algorithms learn a mapping between high-resolution and low-resolution images and then use this mapping to guide the generation of the high-resolution image. A generative adversarial network (GAN) consists of a generator network and a discriminator network that play against each other until a Nash equilibrium is reached; GAN-based super-resolution methods can restore the texture information and high-frequency details of a downsampled image. However, GAN-based super-resolution algorithms are typically trained for a single magnification factor, so their versatility is limited. Although convolutional neural networks have achieved breakthroughs in the accuracy and speed of traditional single-frame super-resolution reconstruction and can achieve a higher peak signal-to-noise ratio (PSNR), most of them use mean square error (MSE) as the optimization objective. As a result, although a higher PSNR can be achieved, at larger downsampling factors the reconstructed image is too smooth, lacks high-frequency details, and is perceptually unsatisfying, failing to match the fidelity expected at the higher resolution. When dealing with complex real-scene data, the representational power of such models is limited; moreover, GAN training is very unstable, which seriously affects the training process.
This paper builds on the generative adversarial network, improving the network structure and optimizing the training method to improve the quality of the generated images. The following improvements are made to the generator: a multi-level structure enlarges the image step by step, so the model can simultaneously generate images at multiple scales while ensuring that images at larger magnifications have higher quality; the ResNet model is improved through recursive and residual learning, and the batch normalization layers are removed, which improves the efficiency of the model while preserving image quality. The recursive and residual learning methods effectively improve the feature expression ability of the model and thus significantly improve the quality of the generated image. An Expand-Squeeze method is proposed for image generation: the channel dimension of the last convolutional layer is expanded to capture more context information, and the image is then generated with a 1x1 convolution kernel. The Expand-Squeeze method effectively reduces the checkerboard effect and improves the quality of the generated image to some extent. This paper also improves the discriminator loss function, measuring the similarity between generated and real images by introducing the Wasserstein distance. The proposed loss function consists of two parts: an adversarial loss and a content loss. The experimental results verify that the improved GAN effectively improves both the quality of the generated images and the stability of model training.
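The PSNR metric discussed throughout is a direct function of the MSE; a minimal sketch:

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio: PSNR = 10 * log10(MAX^2 / MSE),
    where MSE is the mean squared error against the reference image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float(10 * np.log10(max_val ** 2 / mse))
```

Because PSNR is maximized exactly when MSE is minimized, MSE-trained networks score well on it even when the result looks over-smoothed, which is the perceptual gap the adversarial and content losses above are meant to close.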
The corrosion of optical glass has become the main factor affecting product yield, so solving it is urgent. It was found that, under the same cleaning conditions, corrosion mostly occurs on the small side, that is, the side polished first, while the other side is not corroded. This indicates that cleaning has little effect on the corrosion. Corrosion occurs during polishing and during the loading and unloading of the blocking discs. According to the different conditions of double-sided processing, the causes and influencing factors of corrosion were analyzed. By refining the processing technology and adjusting process parameters such as the proportions of the protective paint, the turning temperature and water stains, the corrosion was effectively controlled to within 2%.
In recent years, small and weak target detection has been one of the hotspots in information processing technology. However, the detection precision and speed for weak targets still need to be improved.
As a branch of machine learning, deep learning has become more and more widely used in various fields. Therefore, this paper improves deep convolutional networks for the characteristics of weak target detection in the following three aspects:
Firstly, a dataset dedicated to small and weak target detection is established. The data is sufficient and representative, which helps improve the quality of the network model. Each image in the dataset has a corresponding label giving the image name and the coordinates and size of the target's bounding rectangle.
Secondly, the image is dilated several times so that a target occupying only a few pixels is covered by many pixels: the highlighted portion of the image is dilated, and the resulting image has a larger highlighted area than the original.
Thirdly, the Faster R-CNN algorithm is improved. In this paper, by adjusting the learning rate, a suitable value is determined to obtain the best network model.
The results show that the average precision on the dataset is improved. The method proposed in this paper is of great significance for the detection of small and weak targets. In the military field, research on weak target detection has high value for improving early-warning and counterattack capabilities.
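The repeated dilation in the second step can be sketched with a square structuring element (an illustrative choice; the paper does not specify the element):

```python
import numpy as np

def binary_dilate(img, size=3, n_iter=1):
    """Repeated binary dilation with a size x size square element:
    each pass grows every highlighted region, so a target of only a
    few pixels ends up covered by many pixels."""
    pad = size // 2
    out = img.astype(bool)
    for _ in range(n_iter):
        padded = np.pad(out, pad, constant_values=False)
        grown = np.zeros_like(out)
        for dy in range(size):
            for dx in range(size):
                grown |= padded[dy : dy + out.shape[0], dx : dx + out.shape[1]]
        out = grown
    return out
```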
Kalman filtering is a filtering method based on the minimum mean square error. It is built from the system's state equation, the observation equation, and the statistical characteristics of the system's process noise, and is widely used in target tracking, navigation, guidance, etc. The Kalman filter requires an accurate state model of the system, which greatly limits it in practical applications; neural networks, by contrast, have strong nonlinear mapping capabilities. In this paper, several motion models are selected and simulated in MATLAB. The simulation results show that the prediction performance of the filter optimized by a neural network is better than that of the ordinary Kalman filter.
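The baseline being improved is the standard predict/update recursion; a minimal 1-D constant-velocity Kalman filter (in Python rather than MATLAB, with illustrative noise settings) is:

```python
import numpy as np

def kalman_1d(zs, dt=1.0, q=1e-3, r=1.0):
    """1-D constant-velocity Kalman filter: state x = [position, velocity],
    scalar position measurements zs; standard predict/update equations."""
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
    H = np.array([[1.0, 0.0]])                # measurement model
    Q = q * np.eye(2)                         # process noise covariance
    R = np.array([[r]])                       # measurement noise covariance
    x = np.zeros(2)
    P = np.eye(2)
    est = []
    for z in zs:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)   # update
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)
```

A neural network can replace or correct the fixed motion model F when the true dynamics are nonlinear, which is the direction this paper pursues.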
The integrated and coordinated operation of warships and aircraft is the core issue of air-sea battle control. Shipborne anti-aircraft forces and carrier-based aircraft guard different airspaces. In the face of uncertain threats, the defensive effects of ships and aircraft differ, and to avoid overlap between ship and aircraft firepower, target assignment becomes very important. Based on the zoning principle of warship-aircraft cooperative defense, this paper proposes a method for calculating the target attack time for a combat situation: the estimated time for the ship or the aircraft at its current position to attack a threat is taken as the estimated interception time, accounting for factors such as the aircraft's maneuver time, the weapon's flight time, and the interception area. Based on this method, a target grouping algorithm with minimum target interception time is given, by which warship-aircraft cooperative air defense is realized. Monte Carlo simulation analysis of two combat situations is carried out on a dynamic combat simulation platform. Comparing the simulation results, we find that the proposed algorithm can effectively reduce the damage rate of the warship.
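As a rough illustration of minimum-interception-time assignment, a greedy sketch is shown below; the cost-matrix contents (e.g. maneuver time plus weapon flight time) and the greedy rule are illustrative assumptions, not the paper's exact algorithm:

```python
def assign_targets(intercept_time):
    """Greedy minimum-interception-time assignment: repeatedly commit the
    (defender, target) pair with the smallest estimated interception time,
    at most one target per defender and one defender per target.
    A hypothetical sketch, not the paper's algorithm."""
    pairs = sorted(
        (t, d, g)
        for d, row in enumerate(intercept_time)
        for g, t in enumerate(row)
    )
    used_d, used_g, assignment = set(), set(), {}
    for t, d, g in pairs:
        if d not in used_d and g not in used_g:
            assignment[d] = g
            used_d.add(d)
            used_g.add(g)
    return assignment
```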
With the deterioration of the battlefield environment, joint combat will become an important mode of operation in the future. This paper studies the Department of Defense Architecture Framework (DoDAF), presents its architecture design ideas and concrete architecture design steps, and describes the architecture views involved. A DoDAF-based modeling method for the air-sea joint combat system architecture is presented, which can better describe the operational tasks and the information relations among the constituent nodes and subsystems of a complex integrated air-sea joint operation system. The method is typically applied in the early development stage of complex systems and enhances a consistent understanding among application personnel, system designers, and developers.
Online Multi-Object Tracking (MOT) has wide applications in time-critical video analysis scenarios such as robot navigation and autonomous driving. In tracking-by-detection, a major challenge of online MOT is how to robustly associate noisy object detections in a new video frame with previously tracked objects. This paper aims to build technology that can track the movement of people via surveillance cameras located in stores, though not only there: in principle, the algorithm is applicable to cameras at any premises. The algorithm works with a variety of camera angles. The main innovation of the paper is that the SORT algorithm has been updated to account for the difference between competition datasets and real-world ones: in automatically generated data, recognition is not perfect, people's contours may be of different sizes (rectangles corresponding to the same person may differ in size by a factor of two), and some people may not be recognized at all. A new proximity metric called "soft-iou" has been introduced into SORT. We have achieved 95% accuracy for the daily number of visitors for a jewelry retail chain. This level of accuracy allows the algorithm to be applied in other areas as well: not only retail stores, but also shopping centers, sports events, performances, public transport traffic, etc.
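Plain SORT associates detections with tracks by intersection-over-union. The abstract does not give the exact formula of its "soft-iou" metric, so the `soft_iou` variant below — intersection over the *smaller* box, which stays high when boxes for the same person differ strongly in size — is a hypothetical illustration of the idea, shown next to the standard IoU it would replace.

```python
def _inter_and_areas(a, b):
    """Intersection area and individual areas of boxes (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return iw * ih, area_a, area_b

def iou(a, b):
    """Standard intersection-over-union, the proximity metric in plain SORT."""
    inter, area_a, area_b = _inter_and_areas(a, b)
    return inter / (area_a + area_b - inter) if inter else 0.0

def soft_iou(a, b):
    """Hypothetical 'soft' variant: intersection over the smaller box, so a
    small box nested in a twice-as-large box of the same person still scores
    close to 1 instead of being rejected by the association threshold."""
    inter, area_a, area_b = _inter_and_areas(a, b)
    return inter / min(area_a, area_b) if inter else 0.0
```

For a 1×1 box inside a 2×2 box, standard IoU is 0.25 while the soft variant is 1.0, illustrating why a size-tolerant metric helps with inconsistent detector output.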
The optic disc is the origin of the optic nerve and is considered one of the main structures of the retina. In many automatic segmentation algorithms for retinal anatomy and lesions, optic disc detection is a key pre-processing component and a relevant module for most retinal lesion screening systems. The method studied in this paper, based on simple linear iterative clustering (SLIC) superpixel segmentation of the fundus optic disc, mainly aims to better detect hard exudates in diabetic retinal images. Because the color of the optic disc is similar to that of fundus exudates, the optic disc is a false-alarm source that is often mistaken for one or more exudate candidate regions. Therefore, correct localization and segmentation of the optic disc can improve the accuracy of exudate candidate region detection.
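The core of SLIC is its assignment step: each pixel joins the nearest cluster center under a distance that combines color difference and spatial distance, searched only in a local window. The sketch below shows one such assignment pass on a grayscale image (real SLIC works in CIELAB, iterates, and re-centers clusters; those steps are omitted here).

```python
def slic_assign(img, centers, S, m=10.0):
    """One SLIC assignment step on a grayscale image.

    Each pixel joins the nearest center under the combined distance
    D = sqrt(d_color^2 + (d_space/S)^2 * m^2), searching only centers
    within a 2S x 2S window (S = grid step, m = compactness weight).
    centers is a list of (row, col, intensity) tuples.
    """
    H, W = len(img), len(img[0])
    labels = [[-1] * W for _ in range(H)]
    best = [[float("inf")] * W for _ in range(H)]
    for k, (cy, cx, cv) in enumerate(centers):
        for y in range(max(0, cy - S), min(H, cy + S + 1)):
            for x in range(max(0, cx - S), min(W, cx + S + 1)):
                dc = img[y][x] - cv
                ds2 = (y - cy) ** 2 + (x - cx) ** 2
                D = (dc * dc + ds2 / (S * S) * m * m) ** 0.5
                if D < best[y][x]:
                    best[y][x] = D
                    labels[y][x] = k
    return labels
```

On a fundus image, superpixels produced this way hug the bright optic disc boundary, which is what makes the subsequent disc segmentation robust.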
Object detection results based on deep learning may contain errors or omissions due to occlusion and background clutter, which is an intractable problem. An effective method for improving object detection performance using multiple-viewpoint images is proposed. By performing feature point matching on objects in the overlap between different views, groups of points with semantic information can be obtained. These point groups can be used to generate new detection boxes, which can correct erroneous ones in the raw results. Experiments show that the proposed method is a viable solution and that recall is significantly improved.
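The box-generation step can be sketched simply: once a group of matched feature points is known to belong to one object, a new detection box is the axis-aligned rectangle enclosing them. The function name and the optional margin are illustrative assumptions.

```python
# Sketch: derive a detection box from a group of matched feature points
# belonging to one object (margin is an assumed, optional padding).

def box_from_points(points, margin=0.0):
    """Axis-aligned bounding box (x1, y1, x2, y2) enclosing a point group."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```

Such a box can then replace or be merged with a raw detection that the single-view detector got wrong or missed.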
Stereo matching is one of the most important computer vision tasks. Several methods can be used to compute the matching cost between two pictures. This paper proposes a method that uses convolutional neural networks to compute the matching cost. The network architecture is described, as well as the training process. The matching cost metric based on the output of the neural network is applied to a base method that uses a support point grid (ELAS). The proposed method was tested on the Middlebury benchmark images and showed an accuracy improvement over the base method.
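For context, the classical hand-crafted matching cost that a learned CNN cost replaces is something like the sum of absolute differences (SAD) between patches, with disparity chosen winner-takes-all. This sketch is the baseline idea, not the paper's network.

```python
def sad_cost(left_row, right_row, x, d, window=1):
    """Sum of absolute differences between a patch centred at x in the left
    scanline and the patch centred at x - d in the right scanline."""
    cost = 0
    for k in range(-window, window + 1):
        xl, xr = x + k, x - d + k
        if 0 <= xl < len(left_row) and 0 <= xr < len(right_row):
            cost += abs(left_row[xl] - right_row[xr])
    return cost

def best_disparity(left_row, right_row, x, max_d):
    """Winner-takes-all disparity: the d with the lowest matching cost."""
    return min(range(max_d + 1),
               key=lambda d: sad_cost(left_row, right_row, x, d))
```

In the paper's pipeline, a CNN-produced similarity score takes the place of `sad_cost` inside the ELAS support-point framework.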
Small object detection in complex scenes is a difficult task in image processing. Small objects in images are usually also weak objects: the contrast between target and background is so subtle that they are difficult to perceive. SSD (Single Shot MultiBox Detector) is an object detection method proven effective for normal-size objects but unable to handle small targets. A new small object detection method based on SSD is proposed in this paper. First, a local maxima detector is applied to the image to obtain local maximum points, which are taken as the centers of prior boxes for subsequent object detection in feature maps at different levels. Second, object extraction is performed in conv2_2 and conv3_3, which ensures that small objects do not disappear in high-level feature maps. Finally, a method to mark small objects is presented. The proposed method is evaluated on several videos, which shows that it is feasible and effective.
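The first step above — finding local maxima to seed prior-box centers — can be sketched as an 8-neighbour peak test with a contrast threshold. The threshold parameter is an assumption; the paper does not specify its detector's exact form.

```python
def local_maxima(img, threshold=0):
    """Return (row, col) of interior pixels strictly greater than all eight
    neighbours and above a contrast threshold; these serve as candidate
    centres for prior boxes."""
    peaks = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            v = img[r][c]
            if v <= threshold:
                continue
            neigh = [img[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)]
            if all(v > n for n in neigh):
                peaks.append((r, c))
    return peaks
```

Seeding priors at such peaks concentrates the detector on the few pixels where a weak small target can plausibly sit, instead of tiling priors densely everywhere.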
To address the long iteration time and poor image quality of traditional infrared multispectral image reconstruction methods based on compressed sensing (CS), a residual-based auto-encoder network is proposed. Auto-encoders are unsupervised neural networks in which the output and input layers share the same number of nodes and which can reconstruct their own inputs through encoder and decoder functions. Using this encoding-decoding technique, the network learns spectral information from real infrared multispectral images and reconstructs high-quality images quickly through a fast forward pass of the auto-encoder. The performance of the method is verified on multiple infrared multispectral images. The results show that the method offers high image processing efficiency and high spatial resolution. Compared with the traditional compressed sensing method, the residual-based auto-encoder network reconstructs infrared multispectral images better.
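The structural idea — an encoder-decoder whose output is added to a skip connection from the input, so the network only has to learn the reconstruction residual — can be shown as a bare forward pass. Layer shapes, the single skip connection, and the absence of decoder activations are simplifying assumptions, not the paper's architecture.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def linear(W, b, v):
    """Dense layer: W is a list of rows, b a bias vector."""
    return [sum(wij * xj for wij, xj in zip(row, v)) + bi
            for row, bi in zip(W, b)]

def residual_autoencoder(x, enc, dec):
    """Forward pass of a residual auto-encoder sketch: the decoder output is
    added to a skip connection from the input, so the network learns only
    the residual. enc and dec are lists of (W, b) layer parameters."""
    h = x
    for W, b in enc:
        h = relu(linear(W, b, h))   # encoder layers with ReLU
    y = h
    for W, b in dec:
        y = linear(W, b, y)         # decoder kept linear in this sketch
    return [xi + yi for xi, yi in zip(x, y)]
```

A useful property of the residual form is visible immediately: with all-zero weights the network is already the identity map, which is part of why residual reconstruction trains quickly.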
The spatial resolution of a laser illumination imaging system is limited by the small pixel count of its single-photon detector. To solve this problem, we demonstrate a laser illumination imaging system with compressive coding and introduce deep learning into compressed sensing (CS) image reconstruction based on a residual network. Specifically, by exploiting the prior information of sparsity, better imaging results with much higher resolution can be obtained from a small amount of observation data. A digital micromirror device (DMD) is used to achieve sparse coding in this work. We use two detectors to collect light in the two reflection directions of the DMD, which reduces the number of samples by 50%. In addition, because the time complexity of traditional CS reconstruction methods is too high, we introduce a residual-network-based CS reconstruction method and perform simulation experiments with our data. The experimental results show that our method performs better in terms of both the image quality metric PSNR and the time consumed in reconstruction.
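The two-detector trick rests on a simple identity: each DMD pattern splits the scene between the "on" and "off" mirror directions, so the two detector readings for one pattern always sum to the total scene intensity, and each pattern effectively yields two measurements. A minimal simulation (binary patterns and a flattened scene vector, both assumed for illustration):

```python
def complementary_measurements(pattern_rows, scene):
    """Simulate the two DMD reflection arms: detector A integrates the 'on'
    micromirrors, detector B the 'off' ones, so each binary pattern yields
    two measurements whose sum equals the total scene intensity."""
    y_on, y_off = [], []
    for row in pattern_rows:
        a = sum(p * s for p, s in zip(row, scene))        # 'on' arm
        b = sum((1 - p) * s for p, s in zip(row, scene))  # 'off' arm
        y_on.append(a)
        y_off.append(b)
    return y_on, y_off
```

Because `y_off` is fully determined by `y_on` and the (constant) total intensity, measuring both arms halves the number of patterns that must be displayed, which is the claimed 50% sample reduction.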
Infrared ship recognition has many applications in port supervision and management. However, when the imaging distance is long or the target changes markedly, traditional methods struggle to achieve accurate detection and recognition. In this paper, we design a single-step cascade neural network consisting of three parts: a feature extraction module, a scale transform module, and a classification-regression module. First, the VGG network is used to extract features of the target images at different levels. Then the scale transform module fuses the high-level and low-level features to reflect the semantic and shallow information of the targets more completely. The generated regions of interest are input to the classification-regression module, which predicts target locations and classes. The main contribution of this paper is to address the specific problems of infrared polymorphic ship detection and recognition: a clustering algorithm is used to generate anchors adapted to our targets, and an attention mechanism is introduced into the model training process. Compared with traditional detection and recognition methods, the proposed single-step cascade neural network achieves better average precision on polymorphic ships.
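A common recipe for anchor generation by clustering — k-means over ground-truth box shapes (w, h) with 1 − IoU as the distance, boxes aligned at the origin — can be sketched as follows. The paper does not specify its clustering algorithm in detail, so this is the standard technique, with naive initialisation, shown for illustration.

```python
def kmeans_anchors(boxes, k, iters=20):
    """Cluster (w, h) box shapes with IoU similarity (boxes aligned at the
    origin); the cluster centroids become detection anchors."""
    def iou_wh(a, b):
        inter = min(a[0], b[0]) * min(a[1], b[1])
        union = a[0] * a[1] + b[0] * b[1] - inter
        return inter / union
    anchors = list(boxes[:k])               # naive initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for box in boxes:                   # assign to most-similar anchor
            j = max(range(k), key=lambda i: iou_wh(box, anchors[i]))
            clusters[j].append(box)
        for i, cl in enumerate(clusters):   # update centroids by mean w, h
            if cl:
                anchors[i] = (sum(b[0] for b in cl) / len(cl),
                              sum(b[1] for b in cl) / len(cl))
    return anchors
```

On a dataset mixing small, square ship boxes with wide, large ones, the centroids separate into one anchor per shape family, which is exactly what adapts the detector's priors to polymorphic targets.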
The detection of chemical plumes is a challenging task in infrared image detection due to the diffusivity of gas plumes. As a general-purpose segmentation architecture, Mask R-CNN can output high-quality instance segmentation masks while efficiently detecting gases; however, it cannot accurately segment deformable targets. Therefore, this paper proposes an infrared image gas plume detection method based on Mask R-CNN with an attention mechanism, which can effectively detect the gas plume in the image and segment the infrared image. First, the preprocessed image is fed into a Feature Pyramid Network (FPN) to obtain the corresponding feature maps. Second, the feature maps are sent to the Region Proposal Network (RPN) to obtain candidate RoIs. Then, an RoI Align operation is performed on the candidate RoIs. Finally, classification, bounding-box regression, and mask generation are performed on these RoIs, and an edge attention mechanism is attached to the mask branch of Mask R-CNN to improve detection accuracy. The experimental results show that the method, validated on real infrared gas images, achieves results competitive with prior methods.
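The RoI Align step mentioned above differs from older RoI pooling by sampling the feature map at fractional coordinates via bilinear interpolation instead of quantising them to the grid — important for thin, deformable plume boundaries. The sampling primitive looks like this (a sketch; edge clamping is a simplifying assumption):

```python
def bilinear_sample(fmap, y, x):
    """Bilinearly interpolate a feature map at a fractional (y, x) location,
    the sampling primitive RoI Align uses to avoid quantising RoI
    coordinates to integer grid cells."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(fmap) - 1)      # clamp at the bottom/right edge
    x1 = min(x0 + 1, len(fmap[0]) - 1)
    dy, dx = y - y0, x - x0
    top = fmap[y0][x0] * (1 - dx) + fmap[y0][x1] * dx
    bot = fmap[y1][x0] * (1 - dx) + fmap[y1][x1] * dx
    return top * (1 - dy) + bot * dy
```

RoI Align averages several such samples per output bin, giving the mask branch sub-pixel-accurate features along the plume edge.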
In recent years, China's UAS industry has developed rapidly and UAS applications have become more and more extensive, but UAS safety problems remain prominent. Based on an analysis of UAS production and usage and of the UAS safety supervision situation, this paper analyzes the existing problems of UAV safety management and control in China and proposes to speed up system construction, strengthen flight control, combine industry supervision with local control, establish a unified supervision platform, and promote the healthy and standardized development of the UAS industry.