Simultaneous Localization and Mapping (SLAM) is one of the most challenging research areas in computer and machine vision for automated scene commentary and explanation, and it has been an active research topic in robotics in recent years. Using SLAM, a robot can estimate its position at distinct points in time, which yields the robot's trajectory, while simultaneously building a map of the environment. SLAM's distinguishing trait is this simultaneous estimation of the robot's location and construction of a map, and it is effective in various types of environment: indoor, outdoor, aerial, underwater, underground, and space. Several approaches have been investigated for applying the SLAM technique in these distinct environments. The purpose of this paper is to provide an accurate and perceptive review of SLAM case histories that rely on laser/ultrasonic sensors and cameras as perception input. In addition, we focus on the three main paradigms of the SLAM problem with their pros and cons. In future work, intelligent methods and new ideas will be applied to visual SLAM to estimate the motion of an intelligent underwater robot and to build a feature map of the marine environment.
We consider geodesic distance transformations for digital images. Given an M × N digital image, a distance image is produced by evaluating local pixel distances. The Distance Transform on Curved Space (DTOCS) evaluates the shortest geodesics within a given pixel neighborhood by accounting for the height displacements between pixels. In this paper, we propose an optimization framework for geodesic distance transformations in a pattern recognition scheme, yielding more accurate machine-learning-based image analysis; we exemplify this with initial experiments on complex breast cancer images. Furthermore, we outline future research that will extend the work presented in this paper.
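To make the DTOCS idea concrete, the following is a minimal sketch of a geodesic distance transform in the DTOCS spirit, assuming the common two-pass (forward/backward) raster scheme iterated to convergence and a local distance of 1 + |height difference| between 4-neighbours. The toy image and seed set are illustrative, not from the paper.

```python
# Sketch of a DTOCS-style geodesic distance transform (assumptions:
# 4-neighbourhood, local cost = 1 + |gray-level difference|).

def dtocs(heights, seeds):
    """heights: 2D list of gray levels; seeds: set of (row, col) with distance 0."""
    rows, cols = len(heights), len(heights[0])
    INF = float("inf")
    d = [[0.0 if (r, c) in seeds else INF for c in range(cols)] for r in range(rows)]

    def relax(r, c, nr, nc):
        # Try to improve d[r][c] via the neighbour (nr, nc).
        if 0 <= nr < rows and 0 <= nc < cols:
            cost = 1 + abs(heights[r][c] - heights[nr][nc])
            if d[nr][nc] + cost < d[r][c]:
                d[r][c] = d[nr][nc] + cost

    changed = True
    while changed:                         # iterate passes until stable
        before = [row[:] for row in d]
        for r in range(rows):              # forward raster pass
            for c in range(cols):
                relax(r, c, r - 1, c)
                relax(r, c, r, c - 1)
        for r in range(rows - 1, -1, -1):  # backward raster pass
            for c in range(cols - 1, -1, -1):
                relax(r, c, r + 1, c)
                relax(r, c, r, c + 1)
        changed = d != before
    return d

img = [[0, 0, 5],
       [0, 9, 5],
       [0, 0, 0]]
dist = dtocs(img, {(0, 0)})  # geodesics route around the high pixel at (1, 1)
```

Note how the flat path (0,0) → (1,0) → (2,0) → (2,1) → (2,2) costs 4, while any route over the height-9 pixel is far more expensive; this is exactly the "curved space" effect of DTOCS.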
The amount of data generated by ultraspectral sounders is so large that considerable savings in data storage and transmission bandwidth can be achieved using data compression. Because of this large data volume, the compression time is of utmost importance. The increasing programmability of commodity Graphics Processing Units (GPUs) offers the potential for considerable speed-ups in data-parallel applications. In our experiments, we implemented a spectral image data compression method called Linear Prediction with Constant Coefficients (LP-CC) using NVIDIA's CUDA parallel computing architecture. LP-CC represents a current state-of-the-art technique in lossless compression of ultraspectral sounder data. The method showed an average compression ratio of 3.39 when applied to publicly available NASA AIRS data, and we achieved a speed-up of 86 compared to a single-threaded CPU version. Thus, a commodity GPU was able to significantly decrease the computational time of a compression algorithm based on constant-coefficient linear prediction.
KEYWORDS: Data compression, Image compression, Data storage, Computer architecture, Computer programming, Satellite communications, Satellites, Data communications, Current controlled current source, Imaging systems
The amount of data generated by hyper- and ultraspectral imagers is so large that considerable savings in data storage and transmission bandwidth can be achieved using data compression. Because of this large data volume, the compression time is important. The increasing programmability of commodity Graphics Processing Units (GPUs) allows their use for General-Purpose computation on GPUs (GPGPU). GPUs offer the potential for a considerable increase in computation speed in data-parallel applications; data-parallel computation on image data executes the same program on many image pixels in parallel. We have implemented a spectral image data compression method called Linear Prediction with Constant Coefficients (LP-CC) using NVIDIA's CUDA parallel computing architecture, a parallel programming architecture designed for data-parallel computation.
CUDA hides the GPU hardware from developers and does not require programmers to explicitly manage threads, which simplifies the programming model. Our GPU implementation is experimentally compared to a native CPU implementation; the speed-up factor was over 30 compared to a single-threaded CPU version.
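The decorrelation stage of constant-coefficient linear prediction can be sketched as follows. This is a hedged illustration, not the paper's implementation: each pixel in a band is predicted from the co-located pixels of the preceding bands using a fixed coefficient vector, and only the integer residuals are passed to the entropy coder. The coefficients and toy data cube below are illustrative assumptions.

```python
# Sketch of LP-CC-style predictive decorrelation (assumed form:
# fixed coefficients applied across the spectral dimension).

def predict_residuals(cube, coeffs):
    """cube: list of bands, each a flat list of pixel values.
    coeffs: constant weights applied to the previous len(coeffs) bands."""
    order = len(coeffs)
    residuals = []
    for b, band in enumerate(cube):
        if b < order:
            residuals.append(band[:])       # first bands stored as-is
            continue
        prev_bands = cube[b - order:b]
        res = []
        for i, x in enumerate(band):
            pred = sum(c * pb[i] for c, pb in zip(coeffs, prev_bands))
            res.append(x - round(pred))     # small integer residuals compress well
        residuals.append(res)
    return residuals

# Toy 3-band cube whose bands differ by a constant offset:
cube = [[10, 12, 14], [11, 13, 15], [12, 14, 16]]
res = predict_residuals(cube, coeffs=[1.0])  # order-1 spectral prediction
```

Because the prediction is the same fixed arithmetic for every pixel, each pixel's residual is independent of the others, which is what makes the compression stage data parallel and well suited to a GPU.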
We propose a novel method for lossless compression of ultraspectral sounder data. The method utilizes spectral linear prediction and an optimal ordering of the granules: the prediction coefficients applied to a granule are optimized using a different granule. The optimal ordering problem is solved using Edmonds's algorithm for optimum branching. The results show that the proposed method outperforms previous methods on publicly available NASA AIRS data.
We present the implementation of a lossless hyperspectral image compression method for novel parallel environments. The method is an interband version of a linear prediction approach for hyperspectral images. The interband linear prediction method consists of two stages: predictive decorrelation that produces residuals and the entropy coding of the residuals. The compression part is embarrassingly parallel, while the decompression part uses pipelining to parallelize the method. The results and comparisons with other methods are discussed. The speedup of the thread version is almost linear with respect to the number of processors.
Methods for noise reduction in multicomponent spectral images are developed and discussed. Multicomponent spectral images can be corrupted by noise either on all channels or on some channels only. In the first case there are two possibilities: either the noise affects all channels in the same way, or it is randomly distributed across the channels. We studied two methods that reduce noise directly in the multicomponent spectral image: the vector median filter and our new method, spectrum smoothing, which ignores neighbouring pixels and instead reduces noise one pixel at a time. The idea behind spectrum smoothing lies in the nature of a color spectrum: a color spectrum is naturally smooth and does not have the peaks that a noisy spectrum would have. If only some of the channels are noisy, the problem is to find the noisy channels. We came to the conclusion that if a channel correlates poorly with its neighboring channel, the channel can be considered noisy, and filtering is applied to that channel. Results from our new spectrum smoothing filter were very promising for Gaussian noise compared to a 3 × 3 Gaussian filter and a 5 × 5 mean filter.
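The core of the spectrum-smoothing idea can be sketched in a few lines: noise is reduced one pixel at a time by smoothing along the spectral axis, relying on the natural smoothness of a color spectrum. The moving-average window and toy spectrum below are illustrative assumptions, not the paper's exact filter.

```python
# Sketch of per-pixel spectrum smoothing (assumption: a simple
# moving average along the channel axis of one pixel's spectrum).

def smooth_spectrum(spectrum, window=3):
    """Moving average over the channels of a single pixel's spectrum."""
    half = window // 2
    n = len(spectrum)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)  # clamp at band edges
        out.append(sum(spectrum[lo:hi]) / (hi - lo))
    return out

noisy = [10, 10, 40, 10, 10]   # impulsive spike at channel 2
clean = smooth_spectrum(noisy)  # the spike is spread out and flattened
```

Note that no spatial neighbours are used at all, which is exactly what distinguishes this approach from the vector median filter.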
Several powerful lossy compression methods have been developed for hyperspectral images. However, it is difficult to determine sufficient quality for reconstructed hyperspectral images. We measured the information loss from lossy compression with Signal-to-Noise Ratio (SNR) and Peak Signal-to-Noise Ratio (PSNR). To obtain more illustrative error measures, unsupervised K-means clustering combined with spectral matching methods was used. The spectral matching methods include Euclidean distance, Spectral Similarity Value (SSV), and Spectral Angle Mapper (SAM). We used two AVIRIS radiance images, compressed with three different methods: the Self-Organizing Map (SOM), Principal Component Analysis (PCA), and a three-dimensional wavelet transform combined with lossless BWT/Huffman encoding. The two-dimensional JPEG2000 compression method was applied to the eigenimages produced by the PCA. Clustering combined with spectral matching was found to be a good method for assessing image quality for many applications. High classification accuracies were achieved even at very high compression ratios. The SAM and the SSV are much more vulnerable to the information loss caused by lossy compression than the Euclidean distance. The results suggest that lossy compression is feasible in many real-world segmentation applications. The PCA transform combined with JPEG2000 was the best compression method according to all error metrics.
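Two of the spectral matching measures mentioned above are straightforward to state in code. The sketch below shows Euclidean distance and the Spectral Angle Mapper (SAM), which measures the angle between a pixel spectrum and a reference spectrum; the example spectra are illustrative.

```python
# Sketch of two spectral matching measures: Euclidean distance and SAM.
import math

def euclidean(a, b):
    """Euclidean distance between two spectra of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def sam(a, b):
    """Spectral angle (radians) between spectra a and b."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp to [-1, 1] to guard against floating-point overshoot.
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

ref = [1.0, 2.0, 3.0]
scaled = [2.0, 4.0, 6.0]   # same spectral shape, different brightness
angle = sam(ref, scaled)   # near zero: SAM ignores uniform scaling
dist = euclidean(ref, scaled)
```

The example highlights a design difference: SAM is insensitive to a uniform brightness change (angle ≈ 0 above), whereas Euclidean distance reports a large error for the same pair, which is one reason the measures react differently to compression artifacts.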
This paper proposes an improvement to an interband version of the linear prediction approach for lossless compression of hyperspectral images. The improvements consist of the use of non-predictable bands and a variable sample-set size. Our improved method achieved an average compression ratio of 3.19 on 13 Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images, compared to 3.08 for the basic method.
We have compared several lossy compression methods for multispectral images. These methods include the Self-Organizing Map (SOM), Principal Component Analysis (PCA), and a three-dimensional wavelet transform combined with traditional lossless coding methods. The two-dimensional DCT/JPEG, JPEG2000, and SPIHT compression methods were applied to the eigenimages produced by the PCA. The information loss from compression was measured with Signal-to-Noise Ratio (SNR) and Peak Signal-to-Noise Ratio (PSNR). To obtain more illustrative error measures, C-means clustering and Euclidean distance for spectral matching were used. The test image was an AVIRIS image with 224 bands, 512 lines, and 614 columns. The PCA in the spectral dimension was the best method in terms of image quality and compression speed. If required, JPEG2000 or SPIHT can be applied in the spatial dimensions to obtain better compression ratios.
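The distortion measures used above can be sketched directly from their mean-squared-error definitions. This is a hedged illustration assuming 16-bit data and the standard MSE-based formulas; the papers' exact conventions may differ, and the toy band values are made up.

```python
# Sketch of SNR and PSNR for a reconstructed band (assumptions:
# MSE-based definitions, 16-bit peak value of 65535).
import math

def mse(orig, recon):
    """Mean squared error between original and reconstructed pixels."""
    return sum((o - r) ** 2 for o, r in zip(orig, recon)) / len(orig)

def snr_db(orig, recon):
    """SNR in dB: mean signal power over mean squared error."""
    signal = sum(o * o for o in orig) / len(orig)
    return 10 * math.log10(signal / mse(orig, recon))

def psnr_db(orig, recon, peak=65535):
    """PSNR in dB: squared peak value over mean squared error."""
    return 10 * math.log10(peak ** 2 / mse(orig, recon))

orig  = [100, 200, 300, 400]
recon = [101, 199, 301, 399]   # small reconstruction error (MSE = 1)
```

Because PSNR normalizes by a fixed peak rather than the actual signal power, the two measures can rank compressed images differently, which is one reason the papers report both.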
This paper proposes an interband version of the linear prediction approach for hyperspectral images. Linear prediction is one of the best-performing and most practical general-purpose lossless image compression techniques known today. The interband linear prediction method consists of two stages: predictive decorrelation producing residuals, and entropy coding of the residuals. Our method achieved compression ratios in the range of 3.02 to 3.14 on 13 Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images.
In this paper, a new group of noise reduction methods for multispectral images is presented. First, a one-dimensional Self-Organizing Map (SOM) is trained using the pixel vectors of the noisy multispectral image. Then, a gray-level index image is formed containing the indexes of the SOM vectors. Several gray-level noise reduction methods are applied to the index image for three noise types: impulse, Gaussian, and coherent noise. Tests are made for three kinds of noise distributions: on all channels, on channels 30-50, and on 9 selected channels. Error measures imply that the obtained results are very good for coherent-noise images, but rather poor for the other noise categories compared to the bandwise coherent filter.
In this paper, a new method for edge detection in multispectral images is presented. It is based on the use of the Self-Organizing Map (SOM), the Peano scan, and a conventional edge detector. The method orders the vectors of the original image so that vectors that are near each other according to some similarity criterion have scalar ordering values near each other. This is achieved using a 2D self-organizing map and the Peano scan. After ordering, the original vector image reduces to a gray-value image, and a conventional edge detector can be applied; in this paper, the Laplace and Canny edge detectors are used. It is shown that, using the proposed method, it is possible to find the same relevant edges that R-ordering-based methods find. Furthermore, it is also possible to find edges in images consisting of metameric colors, i.e. images in which every pixel vector maps to the same location in RGB space; this is not possible using conventional edge detectors that take an RGB image as input. Finally, the new method is tested with a real-world airplane image, giving results comparable with R-ordering-based methods.
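Once the SOM/Peano ordering has reduced the vector image to a single gray-value index image, any conventional edge detector applies. Below is a minimal 4-neighbour Laplacian on a toy index image; the ordering step itself is assumed to have already been done, and the image values are illustrative.

```python
# Sketch: 4-neighbour Laplace operator applied to a gray-value
# index image (the SOM/Peano ordering stage is assumed done).

def laplacian(img):
    """Apply the 4-neighbour Laplacian; border pixels are left at 0."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            out[r][c] = (img[r - 1][c] + img[r + 1][c]
                         + img[r][c - 1] + img[r][c + 1]
                         - 4 * img[r][c])
    return out

idx = [[0, 0, 9],
       [0, 0, 9],
       [0, 0, 9]]            # a vertical boundary between two SOM indexes
edges = laplacian(idx)       # strong response next to the boundary
```

The point of the two-stage design is visible here: the edge detector only ever sees scalar values, so any detector developed for gray-level images can be reused on multispectral data without modification.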
In this paper, a new fast image compression method utilizing an improved version of the Distance Transform on Curved Space (DTOCS) is presented. The maxima of the distances are used directly to select control points. In addition, a new concept, a varying structuring element defined by a curvature constant, is applied to the calculation of distances, and its influence on compression results is studied.
In this paper, two control-point-based image compression methods are presented. In the first method, the encoding is based on the roughness of the surface defined by the gray levels of the image. The second method utilizes a new distance transform, called the Distance Transform on Curved Space (DTOCS). The compression ratios of both methods are very good, and the computations needed are quite simple and require only a short processing time. The study also investigates the properties of control-point-based interpolation.