KEYWORDS: Data modeling, Performance modeling, Internet of things, Education and training, Deep learning, Machine learning, Decision trees, Systems modeling, Feature extraction, Data fusion
In recent years, Internet of Things (IoT) devices have made their way into many different industries. Deep learning and machine learning methodologies have been applied to many IoT-related tasks [1-3], such as intrusion detection and anomaly detection. The efficiency of IoT systems is often hindered by anomalies in the data present within the system, which can lead to undesirable behavior or even a full system shutdown. The detection of these anomalies is therefore of the utmost importance. Over the years, various traditional and neural-network-based machine learning models have emerged for anomaly detection and for classification of corrupted IoT data. However, many of these models fail to capture important features in the data, which can lead to false anomaly detections or to anomalies being missed entirely. In this paper, we investigate the applicability of data fusion for improving the detection of data anomalies. This method uses several different models, such as VGG16, Inception, Xception, and ResNet, to extract features from the data. These extracted features are then fused together to determine whether using multiple models outperforms relying on a single model. This paper also provides a detailed analysis of the efficacy of this fusion-based classification method compared with simpler classification methods, and investigates the applicability of various machine learning and deep learning models for anomaly detection across several IoT datasets [4, 5].
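The fusion step can be illustrated with a short sketch. The following minimal example assumes TensorFlow/Keras pretrained backbones and a generic downstream classifier; it illustrates the approach rather than the paper's exact pipeline.

    # Extract pooled features from several pretrained CNNs and fuse by concatenation.
    import numpy as np
    from tensorflow.keras.applications import VGG16, InceptionV3, Xception, ResNet50
    from sklearn.linear_model import LogisticRegression

    backbones = [
        VGG16(weights="imagenet", include_top=False, pooling="avg"),
        InceptionV3(weights="imagenet", include_top=False, pooling="avg"),
        Xception(weights="imagenet", include_top=False, pooling="avg"),
        ResNet50(weights="imagenet", include_top=False, pooling="avg"),
    ]

    def fused_features(images):
        # One pooled feature vector per backbone, concatenated into a fused vector.
        return np.concatenate([m.predict(images, verbose=0) for m in backbones], axis=1)

    # X: (N, 224, 224, 3) arrays rendered from IoT data; y: anomaly labels (0/1).
    # clf = LogisticRegression(max_iter=1000).fit(fused_features(X), y)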
Zigbee is a popular specification for Internet of Things (IoT) mesh networking that provides a suite of protocols built on the IEEE 802.15.4 standard for radio communication. The Zigbee protocol stack is designed as a series of layers, each with a specific set of functions for communicating data throughout the network. These protocols provide comprehensive functionality for performing various network tasks, including commissioning new networks and devices, broadcasting, unicasting, and groupcasting with end-to-end acknowledgement, securing network traffic through AES-128 encryption, and authenticating full-packet messages. The security features of the Zigbee protocol alone, however, may not be a complete solution for deploying secure IoT networks: the protocol has known vulnerabilities and has been the subject of real-world attacks, as discussed in this paper. Zigbee may be extended and hardened through real-time anomaly detection in IoT Local Area Networks (LANs).
The modern cyber domain continues to be plagued by innumerable forms of malware created on a massive scale. The ever-changing nature of malware threats, combined with the obfuscation techniques used by attackers, creates the need for effective methods of malware classification. As of 2018, an average of one million new forms of malware are created worldwide each day, which raises the question of how to combat these attacks. While most antivirus products scan the integrity and composition of files in the system, we propose a new approach to cyber defense. As a replacement for standard file scans, we advocate converting the malware binary into a grayscale image for classification and visualization. As discovered in previous research, the files within a given malware family tend to display similar characteristics and binary patterns. Given these within-family similarities, we augment each family with synthetic data generated by a Generative Adversarial Network (GAN). This leads to the hypothesis that augmenting each family with synthetic images generated from that same family will improve the learning rate of a Deep Convolutional Neural Network (DCNN). Several DCNN architectures are benchmarked on their learning rates before and after augmentation.
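A minimal sketch of the binary-to-grayscale conversion described above, assuming NumPy and Pillow; the row width of 256 bytes is a common convention in this line of work, not a requirement.

    import numpy as np
    from PIL import Image

    def binary_to_grayscale(path, width=256):
        data = np.fromfile(path, dtype=np.uint8)     # one byte -> one pixel
        rows = len(data) // width
        return Image.fromarray(data[: rows * width].reshape(rows, width), mode="L")

    # binary_to_grayscale("sample.bin").resize((64, 64)).save("sample.png")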
In recent years, the concept of Big Data has become more prominent as both the volume of data and the velocity at which it is produced increase exponentially. By 2020, the amount of data being stored is estimated to reach 44 zettabytes, and currently over 31 terabytes of data are generated every second. Algorithms and applications must be able to scale effectively to the volume of data being generated. One such application designed to work effectively and efficiently with Big Data is IBM's Skylark. Part of DARPA's XDATA program, an open-source catalog of tools for Big Data, Skylark (Sketching-based Matrix Computations for Machine Learning) is a library of functions designed to reduce the complexity of large-scale matrix problems that also implements kernel-based machine learning tasks. Sketching reduces the dimensionality of matrices through randomization, compressing them while preserving key properties and thereby speeding up computations. Matrix sketches can be used to find accurate solutions to computations in less time, or to summarize data by identifying important rows and columns. In this paper, we investigate the effectiveness of sketched matrix computations using IBM's Skylark versus non-sketched computations. We judge effectiveness on two factors: computational complexity and validity of outputs. Initial results from testing with smaller matrices are promising, showing that Skylark achieves a considerable reduction ratio while still performing matrix computations accurately.
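The core idea of sketching fits in a few lines. The example below uses a plain NumPy Gaussian sketch on an overdetermined least-squares problem; Skylark provides optimized implementations of this and related transforms, so this illustrates the principle rather than Skylark's API.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, s = 10_000, 50, 500                       # tall matrix; sketch size s << m
    A = rng.standard_normal((m, n))
    b = A @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)

    S = rng.standard_normal((s, m)) / np.sqrt(s)    # random sketching operator
    x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)   # solve on the sketch
    x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)            # full-size solve
    print(np.linalg.norm(x_sketch - x_exact))       # the two solutions nearly agree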
When several low-resolution images are taken of the same scene, they often contain aliasing and differing subpixel shifts, causing different focuses of the scene. Super-resolution imaging is a technique that can be used to construct high-resolution imagery from these low-resolution images. By combining images, high-frequency components are amplified while blurring and artifacts are removed. Super-resolution reconstruction techniques include methods such as the Non-Uniform Interpolation Approach, which has low resource requirements and allows for real-time applications, and the Frequency Domain Approach. These methods make use of aliasing in the low-resolution images as well as the shifting property of the Fourier transform. Problems arise with both approaches, such as the limited types of blurred images that can be used or the creation of non-optimal reconstructions. Many methods of super-resolution imaging use the Fourier transform or wavelets, but the field is still evolving toward other wavelet techniques such as the Dual-Tree Discrete Wavelet Transform (DTDWT) and the Double-Density Discrete Wavelet Transform (DDDWT). In this paper, we propose a super-resolution method using these wavelet transformations to generate higher-resolution imagery. We evaluate the performance and validity of our algorithm using several metrics, including the Spearman Rank Order Correlation Coefficient (SROCC), Pearson's Linear Correlation Coefficient (PLCC), the Structural Similarity Index Metric (SSIM), the Root Mean Square Error (RMSE), and the Peak Signal-to-Noise Ratio (PSNR). Initial results are promising, indicating that these extensions of the wavelet transform produce a more robust high-resolution image when compared to traditional methods.
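For reference, the evaluation metrics listed above can be computed with standard libraries. A minimal sketch assuming SciPy and scikit-image, for 8-bit grayscale images:

    import numpy as np
    from scipy.stats import spearmanr, pearsonr
    from skimage.metrics import structural_similarity, peak_signal_noise_ratio

    def evaluate(reference, reconstructed):
        ref = reference.ravel().astype(float)
        rec = reconstructed.ravel().astype(float)
        return {
            "SROCC": spearmanr(ref, rec)[0],
            "PLCC": pearsonr(ref, rec)[0],
            "SSIM": structural_similarity(reference, reconstructed, data_range=255),
            "RMSE": float(np.sqrt(np.mean((ref - rec) ** 2))),
            "PSNR": peak_signal_noise_ratio(reference, reconstructed, data_range=255),
        }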
Algorithm selection is paramount in determining how to implement a process. When the results can be computed directly, an algorithm that reduces computational complexity is selected. When the results are less binary, choosing the proper implementation can be difficult, and the effect of different pieces of the algorithm on the final result is hard to quantify. In this research, we propose using a statistical analysis tool known as the General Linear Hypothesis to find the effect of different pieces of an algorithm implementation on the end result. This is done with transform-based image fusion techniques. The study weighs the effect of different transforms, fusion techniques, and evaluation metrics on the resulting images. We identify the best no-reference metric for image fusion algorithm selection and test this method on multiple types of image sets. This assessment provides a valuable tool for algorithm selection to augment current techniques when results are not binary.
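As a concrete illustration, a general-linear-hypothesis test of this kind can be run with statsmodels; the file and column names below are hypothetical stand-ins for the study's actual data.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # One row per fused image: the transform and fusion rule that produced it,
    # plus a no-reference quality score as the response variable.
    df = pd.read_csv("fusion_results.csv")          # columns: score, transform, rule
    model = smf.ols("score ~ C(transform) + C(rule)", data=df).fit()

    # F-tests of the linear hypotheses that each factor's coefficients are
    # jointly zero, i.e. that the factor has no effect on the final result.
    print(sm.stats.anova_lm(model, typ=2))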
KEYWORDS: Image fusion, Information visualization, Visualization, Receivers, Information fusion, Principal component analysis, Sensors, Image quality, Signal to noise ratio, Infrared imaging
Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. Image fusion can be particularly useful when fusing imagery captured at multiple levels of focus. Different focus levels create different visual qualities in different regions of the imagery, which can provide much more visual information to analysts once fused. Multi-focus image fusion would benefit users through automation, which requires evaluating the fused images to determine whether the focused regions of each image have been properly combined. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based metrics, have been developed to make these comparisons. However, accurate assessment of visual quality is hard to scale, which requires validating these metrics for different types of applications. To this end, human-perception-based validation methods have been developed, particularly ones using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods, in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal-to-noise ratio (PSNR).
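As an illustration of this validation step, the ROC/AUC analysis can be computed with scikit-learn; the labels and scores below are hypothetical placeholders for observer judgments and metric outputs.

    from sklearn.metrics import roc_auc_score, roc_curve

    human_labels = [1, 1, 0, 1, 0, 0, 1, 0]                     # 1 = judged properly fused
    metric_scores = [0.9, 0.8, 0.3, 0.7, 0.4, 0.2, 0.6, 0.5]    # no-reference metric values
    fpr, tpr, _ = roc_curve(human_labels, metric_scores)
    print("AUC:", roc_auc_score(human_labels, metric_scores))   # 1.0 = perfect agreement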
Automated image fusion has a wide range of applications across a multitude of fields, such as biomedical diagnostics, night vision, and target recognition. Automation in the field of image fusion is difficult because many types of imagery data can be fused using different multi-resolution transforms. Each transform provides different coefficients for image fusion, creating a large number of possibilities. This paper seeks to understand how the selection of the multi-resolution transform could be automated for different applications, starting with the multi-focus and multi-modal image sub-domains. The study analyzes the effectiveness of the transforms within each sub-domain and identifies one or two transforms that are most effective for image fusion. The transform techniques are compared comprehensively to find a correlation between the characteristics of the fusion inputs and the optimal transform. The assessment is completed using no-reference image fusion metrics, including information-theory-based, image-feature-based, and structural-similarity-based methods.
There is a strong initiative to maximize the visual information in a single image by fusing the salient data from multiple images. Many multi-focus imaging systems exist that could provide better image data if their images were fused together. A fused image allows an analyst to make decisions based on a single image rather than cross-referencing multiple images. The bandelet transform has proven to be an effective multi-resolution transform for both denoising and image fusion through its ability to calculate geometric flow in localized regions and to decompose the image over an orthogonal basis in the direction of that flow. Many studies have developed and validated algorithms for wavelet image fusion, but the bandelet transform has not been well investigated. This study investigates the use of bandelet coefficients in place of wavelet coefficients in modified versions of image fusion algorithms. There are many different methods for fusing these coefficients for multi-focus and multi-modal images, such as the simple average, absolute minimum and maximum, Principal Component Analysis (PCA), and a weighted average, as sketched below. This paper compares the image fusion methods using a variety of no-reference image fusion metrics, including information-theory-based, image-feature-based, and structural-similarity-based assessments.
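A minimal sketch of two of these fusion rules (simple average and absolute max) in the coefficient domain. Since the bandelet transform has no standard Python implementation, the example uses a wavelet decomposition via PyWavelets; the fusion rules apply the same way to bandelet coefficients.

    import numpy as np
    import pywt

    def fuse_dwt2(img_a, img_b, wavelet="db2", rule="absmax"):
        ca, (ha, va, da) = pywt.dwt2(img_a.astype(float), wavelet)
        cb, (hb, vb, db) = pywt.dwt2(img_b.astype(float), wavelet)

        def fuse(x, y):
            if rule == "average":
                return (x + y) / 2.0                       # simple average rule
            return np.where(np.abs(x) >= np.abs(y), x, y)  # absolute max rule

        fused = (fuse(ca, cb), (fuse(ha, hb), fuse(va, vb), fuse(da, db)))
        return pywt.idwt2(fused, wavelet)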
Multi-focus image fusion is becoming increasingly prevalent, as there is a strong initiative to maximize the visual information in a single image by fusing the salient data from multiple images for visualization. This allows an analyst to make decisions based on a larger amount of information in a more efficient manner, because multiple images need not be cross-referenced. The contourlet transform has proven to be an effective multi-resolution transform for both denoising and image fusion through its ability to capture directional and anisotropic properties while being designed to decompose the discrete two-dimensional domain. Many studies have developed and validated algorithms for wavelet image fusion, but the contourlet has not been as thoroughly studied. When contourlet coefficients are substituted for wavelet coefficients in image fusion algorithms, the result is contourlet image fusion. There are a multitude of methods for fusing these coefficients, and the results demonstrate that there is an opportunity for fusing coefficients in the contourlet domain for multi-focus images. This paper compares the algorithms using a variety of no-reference image fusion metrics, including information-theory-based, image-feature-based, and structural-similarity-based assessments, to select the best image fusion method.
As technology and internet use grow at an exponential rate, video and imagery data are becoming increasingly important. Techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, as in content-based image retrieval (CBIR). Imagery data is segmented, automatically analyzed, and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many Image Fusion (IF) algorithms have been proposed, but only a few metrics are used to evaluate their performance. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms that compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it computes detection failures, false alarms, precision and recall metrics, background and foreground region statistics, and splits and merges of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
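The precision and recall figures mentioned above follow directly from the detection counts. A small sketch, with illustrative numbers:

    def precision_recall(true_positives, false_alarms, detection_failures):
        precision = true_positives / (true_positives + false_alarms)
        recall = true_positives / (true_positives + detection_failures)
        return precision, recall

    # Example: 42 correct detections, 6 false alarms, 8 detection failures.
    p, r = precision_recall(42, 6, 8)
    print(f"precision={p:.3f} recall={r:.3f}")    # precision=0.875 recall=0.840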
In recent years, digital cameras have been widely used for image capture, embedded in cell phones, laptops, tablets, webcams, and similar devices. Image quality is an important component of digital image analysis. To assess image quality for these mobile products, a standard image is normally required as a reference, in which case the Root Mean Square Error and Peak Signal-to-Noise Ratio can be used to measure the quality of the images. However, these methods are not possible when no reference image exists. In our approach, a discrete wavelet transformation is applied to the blurred image, decomposing it into an approximation image and three detail sub-images: the horizontal, vertical, and diagonal images. We then measure noise on the detail images and blur on the approximation image to assess image quality, computing a noise mean and noise ratio from the detail images and a blur mean and blur ratio from the approximation image. The Multi-scale Blur Detection (MBD) metric thus provides an assessment of both noise and blur content. These values are weighted based on a linear regression against full-reference quality values. From these statistics, we can estimate image quality without needing a reference image. We then test the validity of the obtained weights through R² analysis, as well as by using them to estimate the quality of an image with a known quality measure. The results show that our method provides acceptable estimates for images containing low to mid noise levels and blur content.
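The decomposition step can be sketched with PyWavelets; the statistics below are simple stand-ins for the paper's noise and blur measures, shown only to illustrate where each measure is taken.

    import numpy as np
    import pywt

    def mbd_statistics(image):
        # One DWT level: approximation image plus horizontal/vertical/diagonal details.
        approx, (horiz, vert, diag) = pywt.dwt2(image.astype(float), "haar")
        details = np.concatenate([horiz.ravel(), vert.ravel(), diag.ravel()])
        noise_mean = float(np.mean(np.abs(details)))              # noise shows in the details
        noise_ratio = float(np.mean(np.abs(details) > details.std()))
        blur_mean = float(np.abs(np.gradient(approx)[0]).mean())  # weak gradients suggest blur
        return noise_mean, noise_ratio, blur_mean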
Wavelet transformation has become a cutting-edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is performed by breaking up a signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is its ability to reveal small changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach for identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes polarization parameters, which are calculated from the 0°, 45°, 90°, 135°, right-circular, and left-circular intensity measurements, provide the spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared polarimetry imagery to discriminate targets, such as dismounts and vehicles, from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show that wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
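The Stokes computation referenced above reduces to simple arithmetic on the six intensity measurements. A minimal sketch; the degree-of-linear-polarization line is a commonly used derived quantity, added here for illustration:

    import numpy as np

    def stokes(i0, i45, i90, i135, i_rc, i_lc):
        s0 = i0 + i90                       # total intensity
        s1 = i0 - i90                       # horizontal vs. vertical linear
        s2 = i45 - i135                     # +45 deg vs. -45 deg linear
        s3 = i_rc - i_lc                    # right vs. left circular
        dolp = np.sqrt(s1**2 + s2**2) / s0  # degree of linear polarization
        return s0, s1, s2, s3, dolp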
KEYWORDS: Signal detection, Wavelets, Detection and tracking algorithms, Data fusion, Neural networks, Signal processing, Image fusion, Electrocardiography, Signal generators, Discrete wavelet transforms
Detecting anomalies in non-stationary signals has valuable applications in many fields, including medicine and meteorology, such as identifying possible heart conditions from Electrocardiography (ECG) signals or predicting earthquakes from seismographic data. Given the many available anomaly detection algorithms, it is important to compare candidate methods. In this paper, we examine and compare two approaches to anomaly detection and investigate how data fusion methods may improve performance. The first approach uses an artificial neural network (ANN) to detect anomalies in a wavelet-denoised signal. The other method uses a perspective neural network (PNN) to analyze an arbitrary number of "perspectives", or transformations, of the observed signal for anomalies. Possible perspectives include wavelet denoising, the Fourier transform, peak filtering, etc. In order to evaluate these techniques via signal fusion metrics, we first apply signal preprocessing techniques, such as denoising, to the original signal and then use a neural network to find anomalies in the resulting signal. From this secondary result, it is possible to apply data fusion techniques that can be evaluated via existing data fusion metrics for single and multiple perspectives. The results show which anomaly detection method, according to the metrics, is better suited overall for anomaly detection applications. The method used in this study could also be applied to compare other signal processing algorithms.
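The wavelet de-noising perspective can be sketched with PyWavelets; soft thresholding with the universal threshold is one common choice, assumed here rather than taken from the paper.

    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest scale
        thresh = sigma * np.sqrt(2 * np.log(len(signal)))     # universal threshold
        coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)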
Image registration is a fundamental enabling technology in computer vision. An accurate image registration algorithm significantly improves techniques for computer vision problems such as tracking, fusion, change detection, and autonomous navigation. In this paper, our goal is to develop an algorithm that is robust and automatic, can perform multi-modality registration, reduces the Root Mean Square Error (RMSE) below 4, increases the Peak Signal-to-Noise Ratio (PSNR) above 34, and uses the wavelet transformation. Preliminary results show that the algorithm achieves a PSNR of approximately 36.7 and an RMSE of approximately 3.7. This paper provides a comprehensive discussion of the wavelet-based registration algorithm for Remote Sensing applications.
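The two targets cited above are straightforward to compute; for 8-bit imagery, PSNR follows directly from RMSE, and the reported figures are mutually consistent:

    import numpy as np

    def rmse_psnr(reference, registered):
        err = reference.astype(float) - registered.astype(float)
        rmse = np.sqrt(np.mean(err**2))
        psnr = 20 * np.log10(255.0 / rmse)    # PSNR in dB for 8-bit data
        return rmse, psnr

    # Note: an RMSE of about 3.7 gives 20*log10(255/3.7) ~ 36.8 dB,
    # matching the reported PSNR of approximately 36.7.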
We address the problem of improving the performance of wavelet-based fractal image compression by applying efficient triangulation methods. We construct iterated function systems (IFS) in the tradition of Barnsley and Jacquin, using non-uniform triangular range and domain blocks instead of uniform rectangular ones. We search for matching domain blocks in the manner of Zhang and Chen, performing a fast wavelet transform on the blocks and eliminating low-resolution mismatches to gain speed. We obtain further improvements through the efficiencies of binary triangulations (including the elimination of affine and symmetry calculations and reduced parameter storage), and by pruning the binary tree before construction of the IFS. Our wavelets are triangular Haar wavelets and "second generation" interpolation wavelets as suggested by Sweldens' recent work.
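The low-resolution pruning step can be sketched as follows; the snippet uses square blocks and a single Haar averaging level as a stand-in for the triangular blocks above, purely to illustrate the early-rejection idea.

    import numpy as np

    def haar_approx(block):
        # One 2-D Haar approximation level: average over 2x2 neighborhoods.
        return (block[0::2, 0::2] + block[0::2, 1::2]
                + block[1::2, 0::2] + block[1::2, 1::2]) / 4.0

    def coarse_mismatch(range_block, domain_block, tol=10.0):
        a = haar_approx(range_block.astype(float))
        b = haar_approx(domain_block.astype(float))
        return np.mean((a - b) ** 2) > tol   # True -> prune before the full affine match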