This PDF file contains the front matter associated with SPIE Proceedings Volume 11913, including the Title Page, Copyright information, and Table of Contents.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.* *Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
The incidence of colon cancer has trended upward in recent years, and colon polyps are one of its signs. Detection and segmentation of colon polyps therefore serve as an auxiliary diagnostic aid for physicians. However, growing model parameter counts and inference memory requirements make deploying polyp segmentation models a challenging engineering task. In this paper, an efficient polyp segmentation model named RP-Unet, based on Unet and RNNPool, is proposed. The first two encoder blocks of Unet, each consisting of two convolutional layers and a max pooling layer, are replaced with the proposed RNNPool Down and Fuse (RDF) modules, which rapidly downsample and fuse the input feature maps while also providing feature maps for the skip connections. The last two encoder blocks are replaced with the proposed Double Convolution with Residual connection and RNNPool (DCRR) modules, in which the convolutional layers are residually connected and the max pooling layer is replaced directly with RNNPool. In both proposed modules, up mapping and channel mapping strengthen feature propagation by mapping activation maps logically instead of allocating unnecessary memory. RP-Unet is evaluated on two polyp segmentation datasets; experiments show that peak inference memory is reduced by almost 22% without a significant loss in segmentation accuracy.
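The memory saving comes largely from downsampling earlier so that full-resolution feature maps are never held in memory. A back-of-envelope sketch (illustrative resolution and channel width, not the paper's measurements; the paper's 22% figure is for the whole network, not the stem alone) shows why:

```python
# Hypothetical peak-activation comparison: a standard Unet stem keeps a
# full-resolution feature map alive, while an RNNPool-style block
# downsamples 4x in one step. Sizes below are assumptions for illustration.

def activation_floats(h, w, channels):
    """Number of float values in a feature map of shape (channels, h, w)."""
    return channels * h * w

H, W = 256, 256          # input resolution (assumed)
C = 64                   # stem channel width (assumed)

# Standard stem: two convs at full resolution, then 2x max pooling.
standard_peak = activation_floats(H, W, C)          # full-res map must be held
# RNNPool-style stem: one 4x downsample, so the largest map is H/4 x W/4.
rnnpool_peak = activation_floats(H // 4, W // 4, C)

saving = 1 - rnnpool_peak / standard_peak
print(f"peak stem activations: {standard_peak} vs {rnnpool_peak} ({saving:.0%} smaller)")
```

The stem-only saving is much larger than 22% because later layers, weights, and skip-connection buffers also contribute to the true network-wide peak.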
Registration of infrared and visible images of substation equipment is of great significance for power equipment inspection and fault diagnosis. Substation scenes are complex, equipment images usually have cluttered backgrounds, and feature points in the visible image easily fall on the background. Because metal conducts heat well, its temperature stays close to the ambient temperature, so metal parts imaged against a metal tower background cannot be clearly distinguished in the infrared image, which easily causes mismatches or even registration failure. Existing registration methods such as SIFT, SURF, and ASIFT struggle with this kind of substation imagery with complex backgrounds. To address this problem, this paper proposes an infrared and visible image registration algorithm based on Multi-Scale Retinex and ASIFT features. First, the Multi-Scale Retinex algorithm separates the components representing the object's intrinsic properties in the visible image, weakening the influence of the cluttered background. Then, the ASIFT algorithm applies affine transformations to simulate affine deformation under all viewpoints, and the feature points are coarsely matched. Finally, the random sample consensus algorithm is applied to eliminate mismatched points. Experimental results show that the algorithm increases the number of matched points by at least 4 times, improves the average matching accuracy by 13%, and shortens the average matching time by 183 ms.
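As a rough illustration of the Retinex step, here is a minimal Multi-Scale Retinex sketch in NumPy. It uses a box filter as a stand-in for the usual Gaussian surround, and the scales are arbitrary choices, not the paper's:

```python
import numpy as np

def box_blur(img, k):
    # Simple separable box filter, a stand-in for the Gaussian surround.
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def multi_scale_retinex(img, scales=(3, 7, 15), eps=1e-6):
    """Average of log(image) - log(blurred image) over several scales:
    the reflectance-like component with slowly varying illumination removed."""
    img = img.astype(float) + eps
    return sum(np.log(img) - np.log(box_blur(img, k) + eps)
               for k in scales) / len(scales)
```

On a constant (pure-illumination) image the output is approximately zero everywhere, which is the sense in which MSR suppresses smooth background variation.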
To accurately obtain the status of substation equipment, large numbers of infrared and visible images are used during equipment maintenance. Traditional image fusion methods often lose the temperature information of the image, resulting in low brightness and contrast in the fused image, while deep learning fusion algorithms lose some details. This paper therefore proposes an infrared and visible light fusion algorithm based on NSCT and a Siamese network to improve the quality of the fused image. First, the infrared and visible images are decomposed by NSCT, and the high-frequency and low-frequency parts are fused under a guided-filtering fusion rule to obtain the new high-frequency subband coefficients FH and the new low-frequency subband FL; a first fused image is then obtained by NSCT reconstruction from FH and FL. Next, a convolutional network produces a weight map from the first fused image and the infrared image; at the same time, a Laplacian pyramid decomposes the primary fused image and the infrared image, and a Gaussian pyramid decomposes the weight map. Finally, the primary fused image subbands, infrared image subbands, and weight map subbands are fused by a local window energy rule, and the final image is reconstructed with the Laplacian pyramid. Experiments show that both the subjective and objective indicators of the fused image are improved.
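The pyramid-blending stage can be sketched as follows. This is a generic two-level Laplacian/Gaussian pyramid blend with average-pool downsampling and nearest-neighbour upsampling, not the paper's NSCT decomposition or local-window-energy rule:

```python
import numpy as np

def down(img):   # 2x2 average pooling (even-sized images assumed)
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):     # nearest-neighbour upsampling
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fuse(a, b, w, levels=2):
    """Blend images a and b under weight map w: Laplacian detail bands of
    the inputs are mixed with a Gaussian pyramid of the weight map."""
    blended = []
    for _ in range(levels):
        a_small, b_small, w_small = down(a), down(b), down(w)
        la, lb = a - up(a_small), b - up(b_small)   # Laplacian detail bands
        blended.append(w * la + (1 - w) * lb)       # weight the detail bands
        a, b, w = a_small, b_small, w_small
    out = w * a + (1 - w) * b                       # blend the coarsest level
    for band in reversed(blended):
        out = up(out) + band                        # reconstruct bottom-up
    return out
```

A sanity property of this construction: fusing an image with itself, under any weight map, reconstructs the image exactly.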
The purpose of the study is to analyze whether certain components can be extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) for the classification of prostate cancer (PCa). Nonnegative matrix factorization (NMF) was used to extract the characteristic curve from DCE-MRI, and the peak sharpness of the characteristic curve was evaluated to classify prostates with and without PCa. Results showed that the peak sharpness of the characteristic curve differed significantly between prostates with and without PCa (p = 0.008), and the area under the receiver operating characteristic curve was 0.86 ± 0.08. We conclude that NMF can decompose DCE-MRI into components, and that the peak sharpness of the characteristic curve shows promise for accurately classifying prostates with and without PCa.
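For readers unfamiliar with NMF, a minimal multiplicative-update implementation (generic, not the study's code; the rank and iteration count below are arbitrary) factors a nonnegative matrix of voxel time-curves into components whose rows of H play the role of characteristic curves:

```python
import numpy as np

def nmf(V, rank, iters=200, seed=0):
    """Multiplicative-update NMF: V (voxels x timepoints) ~ W @ H,
    with W and H kept elementwise nonnegative throughout."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + 1e-3
    H = rng.random((rank, V.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update temporal components
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update spatial weights
    return W, H
```

Each row of H is then a component time-curve on which a peak-sharpness measure could be evaluated.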
As Video synthetic aperture radar (SAR) technology has developed rapidly in recent years, moving target detection and tracking have gradually become a research hotspot in the SAR field. Since moving targets in Video SAR produce relatively clear shadows at their real locations, shadow-based approaches provide a new method for ground moving target detection. In this paper, a new approach based on image fusion enhancement is proposed to improve the extraction of target shadows in single-frame Video SAR images, and the shadow segmentation process is studied accordingly. First, a median filter denoises the image; then several enhancement methods, including piecewise linear stretching, histogram specification, and S-curve enhancement, improve the contrast between shadows and background; next, an adaptive threshold segmentation algorithm separates the background from the target shadow; finally, morphological processing further highlights the target shadow. The effectiveness of the proposed approach is verified on the Video SAR dataset published by Sandia National Laboratories.
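Two of the pipeline steps, piecewise linear stretching and adaptive thresholding, can be sketched in NumPy as below. The block size and offset are made-up values, and a local-mean rule stands in for whichever adaptive scheme the authors used:

```python
import numpy as np

def stretch(img, lo, hi):
    """Piecewise linear stretch: map [lo, hi] to [0, 1], clipping outside."""
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def adaptive_threshold(img, block=5, offset=0.05):
    """Mark pixels darker than their local mean minus an offset, a simple
    adaptive rule for picking out dark shadow regions."""
    pad = block // 2
    padded = np.pad(img, pad, mode='edge')
    local_mean = np.zeros_like(img, dtype=float)
    for dy in range(block):
        for dx in range(block):
            local_mean += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    local_mean /= block * block
    return img < local_mean - offset
```

On a bright scene with one small dark patch, the mask picks out exactly the patch, which is the behaviour wanted for shadow extraction.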
Image classification is the most basic and important technical branch of computer vision, and effectively extracting useful information from images has become increasingly urgent. First, we use a self-attention module that exploits the correlations between features to weight and sum them to obtain the image category; the self-attention mechanism is computationally simpler, which greatly reduces the complexity of the model. Second, we also apply an optimization strategy to the complex CNN (Convolutional Neural Network) model: global average pooling replaces the fully connected layer, which reduces the complexity of the model and generates fewer features. Finally, we verify the feasibility and effectiveness of our model on two datasets.
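A minimal single-head version of the attention-then-pool idea might look like this; the weight matrices are hypothetical, since the abstract does not specify the exact architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_pool(feats, Wq, Wk, Wv):
    """Single-head self-attention over n feature vectors, followed by
    global average pooling to one descriptor (replacing a dense layer)."""
    Q, K, V = feats @ Wq, feats @ Wk, feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))   # (n, n) correlation weights
    out = attn @ V                                   # weighted sum of values
    return out.mean(axis=0)                          # global average pooling
```

With zero query/key projections the attention is uniform, and the pooled descriptor degenerates to a plain average of the projected features, a useful sanity check.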
At certain stages of the graphics pipeline, and most notably during compression for transmission and storage, triangle meshes may undergo fixed-point arithmetic quantisation of their vertex coordinates. This paper presents the results of a psychophysical experiment in which discrimination thresholds between original unquantised triangle meshes and versions of them quantised at various levels were estimated. The experiment had a two-alternative forced choice design. Our results show that the amount of geometric information in a mesh, as measured by its filesize after compression, correlates with the discrimination threshold. On the other hand, we found no correlation between the discrimination thresholds and the quality of the underlying meshing, as measured by the mean aspect ratio of the mesh triangles.
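Uniform fixed-point quantisation of vertex coordinates, the stimulus manipulation studied here, can be sketched as follows (a generic textbook scheme, not necessarily the exact one used in the experiment):

```python
import numpy as np

def quantise_vertices(verts, bits):
    """Uniformly quantise vertex coordinates to the given bit depth:
    map the bounding box per axis onto integers in [0, 2^bits - 1],
    then map back, losing at most half a quantisation step per axis."""
    lo, hi = verts.min(axis=0), verts.max(axis=0)
    levels = (1 << bits) - 1
    q = np.round((verts - lo) / (hi - lo) * levels)  # integer grid codes
    return q / levels * (hi - lo) + lo               # dequantised coordinates
```

The per-axis error bound (half a step of size bounding-box-extent / (2^bits - 1)) is what shrinks as the quantisation level rises toward the discrimination threshold.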
Ocean waves are an inexhaustible energy resource; if harnessed efficiently, they could be an ideal source of energy. Although the possibilities are vast, working at sea carries serious dangers, especially when surrounded by large bodies of water. This study focused on the design and construction of a Local Instrument with Geo-Tagging for Area Storm Surges (LIGTASS), a detection and monitoring system for marine vessels. It utilizes the multi-point absorber design, a type of Wave Energy Converter (WEC), in a device that detects and monitors storm surges. It aims to address small-scale fishermen's lack of weather information, security, and power availability while offshore. The device measures three (3) parameters, (i) barometric (atmospheric) pressure, (ii) wave height, and (iii) wind speed, while generating its own power from ocean waves. It includes a geo-tagging feature for device traceability and geo-mapping. Real-time data are stored via LTE in a cloud database using Arduino Uno modules. The results show that the parameters above suffice to predict and detect storm surge occurrences based on the standards set by the Department of Science and Technology-Philippine Atmospheric, Geophysical and Astronomical Services Administration (DOST-PAGASA). The output data will not replace any DOST-PAGASA declarations. All data were subjected to a t-test for statistical analysis, and the data are valid only for the area of deployment.
Least squares regression (LSR)-based classifiers are effective in multi-classification tasks. For hyperspectral image (HSI) classification, spatial structure information usually helps to improve performance; however, most existing LSR-based methods take the spectral vector as input, ignoring important correlations in the spatial domain. To address this drawback, a tensor-patch-based discriminative marginalized least squares regression (TPDMLSR) is proposed, which modifies discriminative marginalized least squares regression (DMLSR) with consideration of inter-class separability by employing the region covariance matrix (RCM). The RCM is computed over a region of interest around each hyperspectral pixel to characterize the intrinsic spatial geometric structure of the HSI. Specifically, TPDMLSR not only retains the advantages of DMLSR but also preserves the spatial-spectral structure and enhances class discrimination for regression by learning a tensor-patch manifold term with a new region covariance descriptor and measuring inter-class similarity more accurately. Experimental results on a membranous nephropathy (MN) dataset validate that TPDMLSR significantly outperforms LSR-based methods in sensitivity, overall accuracy (OA), average accuracy (AA), and Kappa coefficient (Kappa).
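The region covariance matrix used as the spatial descriptor is, in its basic form, just the covariance of per-pixel feature vectors over a patch. A generic sketch, taking the spectral bands as the per-pixel features (an assumption on our part):

```python
import numpy as np

def region_covariance(patch):
    """Region covariance descriptor: the d x d covariance of per-pixel
    feature vectors (here, spectral bands) inside a patch of shape (h, w, d)."""
    feats = patch.reshape(-1, patch.shape[-1])   # n pixels x d features
    mean = feats.mean(axis=0)
    centred = feats - mean
    return centred.T @ centred / (feats.shape[0] - 1)
```

The result is a symmetric positive semidefinite matrix per pixel neighbourhood, which is what the tensor-patch manifold term then compares across classes.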
In air traffic flow management systems, flight trajectory prediction focuses on the passing times and altitudes at certain report points. For this purpose, a KNN-based method using both flight plan and radar trajectory data is proposed in this paper. The method uses radar trajectory data to search for the neighbors of the query trajectory, and then uses the corresponding flight plan data to predict the report-point-conditioned times and altitudes. Experiments on actual flight data verify that the proposed method predicts report-point-conditioned times and altitudes accurately.
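The two-step lookup can be sketched generically as below; Euclidean distance on fixed-length trajectory features is our assumption, and the paper's actual distance measure and features may differ:

```python
import numpy as np

def knn_predict(query, trajectories, targets, k=3):
    """Find the k stored radar trajectories closest to the query and average
    their associated targets, e.g. (passing time, altitude) per report point."""
    diffs = (trajectories - query).reshape(len(trajectories), -1)
    dists = np.sqrt((diffs ** 2).sum(axis=1))    # Euclidean distance per track
    nearest = np.argsort(dists)[:k]              # indices of k nearest tracks
    return targets[nearest].mean(axis=0)         # average their targets
```

In the paper's setting, `trajectories` would hold radar tracks and `targets` the report-point times and altitudes taken from the matching flight plans.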
In natural language processing tasks, we need to extract information from tree topologies; sentence structure can be represented by a dependency tree or a constituency tree. The LSTM can handle sequential information (equivalent to a sequential list), but not tree-structured data. Multi-head self-attention is used in this model. Its main purpose is to reduce computation and improve parallel efficiency without hurting model quality: it avoids the heavy computation and parameter cost of CNNs and the inability of RNNs to parallelize, while retaining parallel computation and long-distance information. The model combines multi-head self-attention with tree-LSTM and uses maxout neurons at the output position. The model's accuracy on SST was 89%.
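The maxout unit mentioned at the output position keeps, for each output dimension, the largest of k affine responses. A minimal sketch:

```python
import numpy as np

def maxout(x, W, b):
    """Maxout unit: evaluate k affine pieces and take the elementwise max.
    W has shape (k, out_dim, in_dim); b has shape (k, out_dim)."""
    pieces = np.einsum('koi,i->ko', W, x) + b   # k affine responses
    return pieces.max(axis=0)                   # max over the pieces
```

With two pieces (W = +1 and -1, zero bias) a maxout unit computes |x|, illustrating how it represents piecewise-linear functions that a single linear unit cannot.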
Errors exist in extracted Chinese handwriting even when language models are employed, owing to the casualness and diversity of handwriting input, and these errors also affect recognition accuracy. Chinese handwriting cannot be converted into encoded text until it is extracted and recognized correctly. Extracted handwriting may contain wrong language types, symbols, words, and word pairs. The conventional approach corrects these errors adaptively based on context. However, all extraction candidates for each written character are fully visualized in bounding boxes, whose overlaps impose extra cognitive burden. Furthermore, conventional correction requires stroke-level gesture accuracy, which reduces the efficiency of correction. Therefore, an improved error-correcting approach is proposed that takes adaptive visualization as a correcting reference and incorporates gesture analysis. Experiments on real-life Chinese handwriting compare the proposed approach with others, and the results demonstrate that it is effective and robust.
This paper proposes a generalized covariance union (GCU) approach to the distributed fusion problem for sensors with different fields of view (FoVs). It uses the fusion results within the intersection of the FoVs (IoF) to estimate the (target positioning) measurement error, and then employs this estimated error to correct the multitarget densities outside the IoF. Compared with the current approach, the GCU approach is more robust to sensor-related measurement error. Simulation experiments verify the effectiveness of the proposed approach.
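A deliberately simple covariance-union sketch conveys the underlying idea of inflating covariances so that one estimate covers both inputs. This conservative variant (midpoint mean, summed inflated covariances) is our illustration only, not the paper's GCU:

```python
import numpy as np

def covariance_union(x1, P1, x2, P2):
    """Conservative covariance union of two estimates (x_i, P_i): pick the
    midpoint mean and a covariance that dominates both inflated covariances.
    Summing the PSD terms U1 and U2 guarantees P >= U1 and P >= U2, at the
    cost of being looser than necessary."""
    x = (x1 + x2) / 2
    U1 = P1 + np.outer(x - x1, x - x1)   # P1 inflated for the mean shift
    U2 = P2 + np.outer(x - x2, x - x2)   # P2 inflated for the mean shift
    return x, U1 + U2
```

Refined covariance-union methods seek the smallest such dominating P rather than the sum, which is where a generalized formulation comes in.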
The Philippines used to be one of the world's prime exporters of coffee. However, due to a lack of technology and the absence of standards, coffee production and exportation have diminished through the years. To this day, coffee farmers rely on manual classification and profiling of coffee beans to meet the global standard. Hence, the researchers created a system that automatically classifies and profiles coffee beans without human intervention, based on bean features extracted with integrated image processing algorithms. The focus of this research is a device that evaluates the size, quality, and roast level of a batch of coffee beans through image processing techniques and a Back Propagation Neural Network (BPNN), which serves as the brain of the device. The integrated processing algorithms include K-mean shift, Blob, and Canny edge detection to extract the features of the coffee beans, and Red Green Blue analysis, Hu's moments, and Blob analysis to feed these features into the BPNN. Based on the standard set by the Philippine Coffee Board Inc., the prototype classified and profiled different coffee beans with up to 100% accuracy.
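A miniature BPNN of the kind described, one hidden layer trained by backpropagation, can be sketched on synthetic stand-in features; the data, layer sizes, and learning rate here are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for extracted bean features (e.g. size, colour statistics):
# two well-separated clusters standing in for two bean grades.
X = np.vstack([rng.normal(0.2, 0.05, (20, 3)), rng.normal(0.8, 0.05, (20, 3))])
y = np.array([0] * 20 + [1] * 20, dtype=float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(0, 0.5, (3, 5)), np.zeros(5)   # input -> hidden
W2, b2 = rng.normal(0, 0.5, (5, 1)), np.zeros(1)   # hidden -> output
lr = 1.0

for _ in range(5000):                        # plain batch gradient descent
    h = sigmoid(X @ W1 + b1)                 # hidden activations
    out = sigmoid(h @ W2 + b2)               # predicted class probability
    d_out = (out - y) * out * (1 - out)      # squared-error output delta
    d_h = d_out @ W2.T * h * (1 - h)         # back-propagated hidden delta
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

acc = ((out > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.0%}")
```

In the actual system, the three extracted feature groups (RGB analysis, Hu's moments, blob features) would take the place of the synthetic inputs.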
As the core component of a photovoltaic system, the quality of solar cells determines the conversion efficiency of electric energy. Several strategies have been proposed to detect cracks in solar cells, but most of them cannot do so efficiently. This paper proposes a new two-stage method for microcrack detection in polycrystalline cell images based on contrastive learning. In the first stage, representations are learned from unlabeled input images with SimCLR. In the second stage, a linear classifier is trained on top of the frozen encoder and its representations. In comparative experiments, unsupervised contrastive learning is compared with cross-entropy training and supervised contrastive learning. The results show that the linear classifier trained on unsupervised representations achieves a top-1 accuracy of 78.39%, which is 7.42% higher than the supervised contrastive learning method and comparable to supervised learning.
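The first stage trains the encoder with SimCLR's NT-Xent contrastive loss, which pulls each image's two augmented views together and pushes all other samples in the batch away. A NumPy sketch of the loss itself (the temperature is an arbitrary choice here):

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss as in SimCLR: z1[i] and z2[i] are embeddings of two
    augmented views of image i; each embedding's positive is its partner
    and the other 2n - 2 embeddings are negatives."""
    z = np.vstack([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalise
    sim = z @ z.T / tau                                # cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # partner index
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

The loss is lower when each embedding is closer to its partner than to the negatives, which is what drives the encoder toward crack-discriminative representations.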
In recent years, deep neural networks have achieved impressive progress in object detection. However, detecting interactions between objects remains challenging, and many researchers study human-object interaction (HOI) detection as a basic task in detailed scene understanding. Most conventional HOI detectors follow a two-stage design and are usually slow at inference. One-stage methods that directly detect HOI triplets in parallel break through the limitations of object detection, but their extracted features are still insufficient. To overcome these drawbacks, we propose an improved one-stage HOI detection approach in which an attention aggregation module and a dynamic point matching strategy play key roles. The attention aggregation module explicitly enhances the semantic expressiveness of interaction points by aggregating contextually important information, while the matching strategy effectively filters negative HOI pairs at inference. Extensive experiments on two challenging HOI detection benchmarks, VCOCO and HICO-DET, show that our method achieves considerable performance compared to the state of the art without any additional human pose or language features.
In view of the complex surface condition of printing rollers, a defect saliency algorithm based on global contrast and edge gradient is proposed. The algorithm uses a gamma transform to adjust the overall brightness of the image and then obtains a saliency map with the LC algorithm; at the same time, Canny edge detection is performed on the initial image, followed by morphological operations, to obtain a second saliency map. Finally, an image fusion algorithm combines the two maps into the final defect saliency map, completing the saliency detection. Experimental results show that the algorithm has a high recognition rate and accuracy and can meet the needs of surface defect detection for printing rollers.
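The LC step assigns each pixel a saliency equal to its summed intensity distance to all other pixels, which a grey-level histogram makes cheap to compute. A sketch of the gamma transform and LC saliency (parameter choices are ours):

```python
import numpy as np

def gamma_transform(img, gamma):
    """Adjust overall brightness of an image with values in [0, 1]."""
    return img ** gamma

def lc_saliency(img):
    """LC global-contrast saliency: each pixel's saliency is its summed
    intensity distance to every other pixel, computed via a 256-bin
    histogram so the cost is independent of image size."""
    levels = (img * 255).astype(int)
    hist = np.bincount(levels.ravel(), minlength=256)
    dist = np.abs(np.arange(256)[:, None] - np.arange(256)[None, :])
    sal_per_level = dist @ hist          # saliency of each grey level
    sal = sal_per_level[levels].astype(float)
    return sal / sal.max()               # normalise to [0, 1]
```

A rare bright defect pixel on a uniform dark background receives maximal saliency, which is exactly the behaviour exploited for defect detection.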
Wagyu beef originated in Japan, though many types of Wagyu are now on the market around the globe, with primary sources including Australia, the United States, Canada, and the United Kingdom. Authentic Japanese Wagyu is well known for its intense marbling, juicy rich flavor, and tenderness, and there are differences in flavor, texture, and quality between the different types of Wagyu. Nowadays, there is growing interest in deep learning as a remarkable solution to several domain problems such as computer vision and image classification. In this study, we therefore present an AI-based approach to identifying Wagyu beef sources via image classification. A convolutional neural network (CNN) was constructed to detect the marbled fat patterns of two sources, Japanese Wagyu and Australian Wagyu. The images were collected from reliable sources on the internet and augmented with DCGAN. The prediction of Wagyu sources achieved a high accuracy of 95%. The CNN model proved a promising method for rapidly characterizing the unique patterns of marbled fat layers, and the classifier would help customers get the quality and taste they expect from the products.
Simple linear iterative clustering (SLIC) is a fast and effective method for superpixel segmentation. However, the similarity measure of typical SLIC, based on spatial and spectral features, fails to produce precise segmentation boundaries, especially for images with complex and irregular shapes. To address this issue, a modified SLIC (MSLIC) method based on spectral, color, and texture information is proposed for medical hyperspectral cell images. A Gabor filter is used to extract detailed texture features by analyzing the image in the frequency domain. MSLIC employs normalization, gamma correction, and principal component analysis (PCA) to preprocess the medical hyperspectral images, and integrates the texture features with spectral and spatial features in the distance measure. Under-segmentation error and boundary recall are used as the segmentation criteria. Experiments on two medical datasets indicate that MSLIC achieves better segmentation performance than the typical SLIC method.
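A basic real Gabor kernel and a brute-force response computation can be sketched as below; the size, wavelength, and sigma are illustrative, not the paper's settings:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (even) Gabor kernel: a cosine carrier at orientation theta
    under an isotropic Gaussian envelope, for oriented texture extraction."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_response(img, kernel):
    """Valid-mode 2-D correlation of the image with the kernel."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out
```

A texture whose stripes match the kernel's wavelength and orientation yields a much stronger response than a flat region, which is the texture cue MSLIC folds into its distance measure.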
Given the complex spatial structure of urban streets, we use two high-precision deep semantic segmentation methods to model street view image data. Through segmentation and quantization, we obtain semantic segmentation prediction maps and realize pixel-level classification of multiple objects in the image in a global sense. To accurately and effectively evaluate urban air quality, which is closely related to residents' health, the target objects related to predicted pollutant concentration in the image are grouped into eight categories. The segmentation results are combined with gas quality data collected by a mobile platform for prediction, yielding a set of air pollutant concentration predictions that city managers can use for reference. In this study, a semantic segmentation network extracts the main environmental factors from street view images as feature vectors for the gas prediction models. All image data used in the experiment were collected in Augsburg, Germany; the sampling tool was a pinhole camera installed on a mobile trolley, set to capture an image every ten seconds. The extracted environmental factors were combined with street-level air measurement data and input into the prediction model for pollutant prediction. This method can serve as a reference path for evaluating urban environmental quality, air indicators, and air pollutant concentrations.
Color constancy usually refers to people's adaptive ability to correctly perceive the color of objects under any light source and is an important prerequisite for advanced tasks such as recognition, segmentation, and 3D vision. The purpose of color constancy computation is to estimate the illumination color of the image. In this work, we built an AlexNet network model to accurately estimate the lighting in the scene. The model comprises an input layer and 8 convolutional layers and takes a 512x512 3-channel image patch as input. Compared with previous network models, AlexNet contains several relatively new techniques: it was the first to successfully apply ReLU and Dropout in a CNN, and it uses GPU acceleration for computation. The resulting illumination color estimation is more robust and stable and can be applied to color correction in image processing and computer vision.
Water is the essence of life, and water pollution is a major threat to all living things on this planet. To help combat water pollution, we created a device that locates and identifies different types of garbage underwater. This paper focuses on the detection and identification of cans, plastics, polystyrenes, and glass underwater using object detection and identification with a Convolutional Neural Network and geotagging. The system setup comprises a webcam, power bank, Raspberry Pi, GPS module, and an improvised floater. The GUI displays the camera's captured video, the number of garbage items identified, and their locations in coordinates. Testing was done in two ways: under different water visibilities and at different water levels. The identification accuracy of our program is 94.33% for plastics, 97.34% for glass, 96.89% for polystyrenes, 98.22% for cans, and 96.88% for random garbage; identification reliability is 100% for plastics, 91.67% for glass, 91.67% for polystyrenes, 95.83% for cans, and 91.67% for random garbage. The mean, median, and mode across visibility levels are 96.375, 98, and 99, and across depth levels 96.385, 98, and 99.