The Internet of Things (IoT) and other emerging ubiquitous technologies are supporting the rapid spread of smart systems, which has underlined the need for secure, open, and decentralized data storage solutions. With its inherent decentralization and immutability, blockchain presents itself as a potential solution to these requirements. However, the practicality of incorporating blockchain into real-time sensor data storage systems demands in-depth examination. While blockchain promises unmatched data security and auditability, some of its intrinsic qualities, namely scalability restrictions, transactional delays, and escalating storage demands, impede its seamless deployment in the high-frequency, voluminous data contexts typical of real-time sensors. This paper presents a methodical investigation into these difficulties, illuminating their underlying causes, effects, and potential countermeasures. In addition, we present a novel, pragmatic experimental setup and analysis of blockchain for smart system applications, with an extended discussion of the benefits and drawbacks of deploying blockchain-based solutions for smart system ecosystems.
Convolutional neural networks (CNNs) are a widely researched neural network architecture that has outperformed other popular deep learning and machine learning methods in image processing, achieving state-of-the-art results in tasks such as image classification and segmentation. CNNs operate on the principle of automatically learning filters, or kernels, in contrast with hand-crafted digital filters, to extract features from images effectively. This paper investigates whether a matrix's determinant can be used to preserve information in CNN convolutional layers. Geometrically, the absolute value of a matrix's determinant is the scaling factor of the linear transformation the matrix represents. When an image is reduced into a feature space through a convolutional layer of a CNN, some information is lost. The intuition is that the scaling factor given by the determinant of the pooling-layer matrix can enrich the feature space, reintroducing both scale information and relations between adjacent pixels that would otherwise be lost.
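As a minimal sketch of this intuition (assuming a 2x2 pooling window treated as a square matrix; the function name `det_augmented_pool` is illustrative, not from the paper):

```python
import numpy as np

def det_augmented_pool(feature_map, k=2):
    """Max-pool a 2D feature map and pair each pooled value with the
    determinant of its k x k window, used here as a scale feature."""
    h, w = feature_map.shape
    pooled, dets = [], []
    for i in range(0, h - h % k, k):
        for j in range(0, w - w % k, k):
            window = feature_map[i:i + k, j:j + k]
            pooled.append(window.max())
            # |det| is the area-scaling factor of the window viewed as a
            # linear map; it also encodes relations between its pixels.
            dets.append(abs(np.linalg.det(window)))
    shape = (h // k, w // k)
    return np.array(pooled).reshape(shape), np.array(dets).reshape(shape)

fmap = np.random.rand(8, 8)
pooled, scale = det_augmented_pool(fmap)
augmented = np.stack([pooled, scale])  # extra "determinant" channel
```

The extra channel simply rides alongside the pooled values, so a downstream layer can learn whether the scale information is useful.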
Blockchain technology has gained prominence as the foundation for cryptocurrencies like Bitcoin. However, its possibilities go well beyond that, enabling the deployment of applications that were not previously feasible as well as enormous improvements to existing technological applications. Several factors impacting the consensus mechanism must fall within a specific range for a blockchain network to be efficient, sustainable, and secure. The long-term sustainability of current networks, like Bitcoin, is in jeopardy because their reconfiguration tends to be inflexible and largely independent of environmental circumstances. To provide a systematic methodology for integrating a sustainable and secure adaptive framework, we propose the amalgamation of cognitive dynamic systems theory with blockchain technology, specifically regarding variable network difficulty. An architecture was designed employing long short-term memory (LSTM) to control the difficulty of a network with Proof-of-Work consensus.
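A minimal sketch of such a controller, assuming the LSTM maps a window of recent network observations (e.g., block intervals, hash rate) to the next difficulty value; the feature set, dimensions, and class name are illustrative rather than the paper's exact design:

```python
import torch
import torch.nn as nn

class DifficultyController(nn.Module):
    """Predicts the next PoW difficulty from a window of recent
    network observations (block times, hash rate, etc.)."""
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # next-step difficulty estimate

window = torch.randn(1, 20, 3)        # the 20 most recent observations
next_difficulty = DifficultyController()(window)
```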
An information filter propagates the inverse of the state error covariance, which is used in the state and parameter estimation process. The term 'information' is based on the Cramér-Rao lower bound (CRLB), which states that the mean square error of an estimator cannot be smaller than an amount determined by its corresponding likelihood function. The most common information filter (IF) is derived from the inverse of the Kalman filter (KF) covariance. This paper introduces preliminary work on developing the information form of the sliding innovation filter (SIF), a relatively new type of predictor-corrector estimator based on sliding mode concepts. In this brief paper, the recursive equations of the sliding innovation information filter (SIIF) are derived and summarized. Preliminary results from application to a target tracking problem are also studied.
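For context, these are the classical information-form recursions of the KF, from which an information filter is typically built; here $Y = P^{-1}$ is the information matrix and $\hat{y} = Y\hat{x}$ the information vector. The SIIF's contribution, derived in the paper, is to replace the Kalman corrector with the SIF's saturated-innovation gain, which is not shown here.

```latex
\begin{aligned}
Y_{k|k-1} &= \left(F_k\, Y_{k-1|k-1}^{-1} F_k^{T} + Q_k\right)^{-1}, &
\hat{y}_{k|k-1} &= Y_{k|k-1} F_k\, Y_{k-1|k-1}^{-1}\, \hat{y}_{k-1|k-1},\\
Y_{k|k} &= Y_{k|k-1} + H_k^{T} R_k^{-1} H_k, &
\hat{y}_{k|k} &= \hat{y}_{k|k-1} + H_k^{T} R_k^{-1} z_k.
\end{aligned}
```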
KEYWORDS: Tunable filters, Covariance, Signal filtering, Simulations, Gain switching, Covariance matrices, Systems modeling, Modeling, Electronic filtering, Monte Carlo methods
State estimation strategies play an essential role in the effective operation of dynamic systems by extracting relevant information about the system's state when faced with limited measurement capability, sensor noise, or uncertain dynamics. The Kalman filter (KF) is one of the most commonly used filters and provides an optimal estimate for linear state estimation problems. However, the KF lacks robustness, as it does not perform well in the face of modelling uncertainties and disturbances. The sliding innovation filter (SIF) is a newly proposed filter that uses a switching gain and innovation term; unlike the KF, it yields only a sub-optimal estimate, but it has been proven robust to modelling uncertainties, disturbances, and ill-conditioned problems. In this work, we propose an adaptive SIF and KF (SIF-KF) estimation algorithm that detects faulty or uncertain conditions and switches between the KF and SIF gains in the absence or presence of such conditions, respectively. A fault detection mechanism based on the normalized innovation squares (NIS) metric is also presented, which triggers the activation of the respective gain in the proposed SIF-KF strategy. Experimental simulations are carried out on a simple harmonic oscillator subject to a fault to demonstrate the proposed SIF-KF's effectiveness over traditional approaches.
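A minimal sketch of the NIS-based switch, assuming the standard chi-square gating test; the helper name and the surrounding filter loop are illustrative:

```python
import numpy as np
from scipy.stats import chi2

def nis_gate(innovation, S, alpha=0.05):
    """Normalized innovation squared test: returns True (fault
    suspected, so use the SIF gain) when the NIS exceeds the
    chi-square threshold for the measurement dimension."""
    nis = float(innovation.T @ np.linalg.inv(S) @ innovation)
    threshold = chi2.ppf(1 - alpha, df=innovation.shape[0])
    return nis > threshold

# Inside the filter loop (illustrative):
# use_sif = nis_gate(z - H @ x_pred, H @ P_pred @ H.T + R)
# K = sif_gain(...) if use_sif else kalman_gain(...)
```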
KEYWORDS: Blockchain, Mining, Network security, Telecommunications, Internet of things, Computer security, Machine learning, Information security, Distributed computing, Data processing
As the technological landscape continues to evolve rapidly, blockchain technology has been widely integrated and employed in various areas of application. Blockchain, at its core, offers a decentralized method for system security and communication. This contrasts with classical security systems, which necessitate a central node for data processing and communication, thereby increasing vulnerability to a single point of failure and attack. Incorporating adaptive subsystems into various features of blockchain technology could greatly enhance functionality without jeopardizing the chain's immutability. Several publications have analyzed network node data in an effort to offer an adaptive version of the consensus mechanism used in the blockchain process. This paper presents a novel adaptive consensus mechanism that regulates the Proof-of-Work mining difficulty based on the perceived anomaly level of network nodes.
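A minimal sketch of the idea, assuming per-node anomaly scores in [0, 1] and a simple linear scaling rule; the actual regulation mechanism and gain are the paper's and are not reproduced here:

```python
def adjust_difficulty(base_difficulty, anomaly_scores, gain=4.0):
    """Raise the PoW difficulty as the network's perceived anomaly
    level grows, so suspicious periods require more work to mine."""
    level = sum(anomaly_scores) / len(anomaly_scores)  # mean score, 0..1
    return base_difficulty * (1.0 + gain * level)

# e.g. three nodes, one behaving anomalously:
print(adjust_difficulty(1000, [0.05, 0.10, 0.90]))  # -> 2400.0
```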
In modern industrial settings, the quality of maintenance efforts directly influences equipment's operational uptime and efficiency. Condition monitoring is a common process for predicting the health of a technical asset, whereby a predictive maintenance strategy can be adopted to minimize machine downtime and potential losses. Throughout the field, machine learning (ML) methods have become noteworthy for predicting failures before they occur, thereby preventing significant financial costs and providing a safer workplace environment. These benefits of predictive maintenance are particularly valuable for military equipment, which is often extremely expensive and whose untimely failure can endanger human lives. In this paper, a prognostic model (PROGNOS) is proposed to predict military equipment's remaining useful life (RUL) based on its monitoring signals. The main components of PROGNOS are an expectation-maximization-tuned Kalman filter (EM-KF) for signal filtering, a recently introduced feature extraction algorithm (PCA-mRMR-VIF), and a predictive LSTM model with an adaptive sliding window. The viability and performance of the proposed model were tested on a highly complex competition dataset, the NASA aircraft gas turbine engine degradation dataset, wherein readings from multiple sensor channels were recorded for degrading machines. Testing results indicate that the proposed PROGNOS model is viable and robust overall, suggesting its general usefulness for signal-emitting military equipment.
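A minimal sketch of the sliding-window data shaping for the LSTM, using a fixed window for simplicity (PROGNOS's adaptive window selection is more involved, and the array shapes here are illustrative):

```python
import numpy as np

def sliding_windows(signals, rul, window):
    """Slice multichannel sensor signals into fixed-length windows,
    each labelled with the RUL at the window's last cycle."""
    X, y = [], []
    for end in range(window, len(signals) + 1):
        X.append(signals[end - window:end])
        y.append(rul[end - 1])
    return np.array(X), np.array(y)

signals = np.random.rand(200, 14)      # 200 cycles, 14 sensor channels
rul = np.arange(199, -1, -1)           # linearly decreasing RUL labels
X, y = sliding_windows(signals, rul, window=30)  # X: (171, 30, 14)
```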
Artificial feedforward neural networks (ANNs) have traditionally been trained by backpropagation algorithms involving gradient descent, in order to optimize the network's weights and parameters during training and minimize the out-of-sample error during testing. However, gradient descent (GD) has been shown to be slow and computationally inefficient compared with studies implementing the extended Kalman filter (EKF) and unscented Kalman filter (UKF) as optimizers in ANNs. In this paper, we propose a new method of training ANNs that uses the sliding innovation filter (SIF) as the optimizer. The SIF, introduced by Gadsden et al., has been demonstrated to be a more robust predictor-corrector than Kalman-type filters, especially in ill-conditioned situations or in the presence of modelling uncertainties. The proposed ANN is trained with the SIF to predict the Mackey-Glass chaotic series, and results demonstrate that the proposed method improves computation time compared to current estimation strategies for training ANNs while achieving results comparable to a UKF-trained neural network.
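A minimal sketch of a single SIF correction step as commonly stated in the SIF literature (pseudoinverse of the measurement matrix times a saturated, normalized innovation); when used as an ANN optimizer, the state would be the weight vector and `H` the corresponding measurement Jacobian, details the paper specifies:

```python
import numpy as np

def sif_update(x_pred, z, H, delta):
    """One SIF correction: the gain saturates the innovation inside
    a boundary layer of width delta, giving a robust switching gain."""
    innovation = z - H @ x_pred
    sat = np.clip(np.abs(innovation) / delta, 0.0, 1.0)
    K = np.linalg.pinv(H) @ np.diag(sat)
    return x_pred + K @ innovation
```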
Amidst the extensive global integration of computer systems and increased connectivity, there have been numerous difficulties in ensuring confidentiality, integrity, and availability across systems. Malware is an ever-present and persistent challenge for security systems of all sorts. Numerous malware detection methods have been proposed, with traditional approaches no longer providing the necessary protection against evolving attack methodologies and strategies. In recent years, machine learning for malware detection has been investigated with great success. In addition, the analysis of application operation code, or opcode, which a program cannot avoid exposing, can reveal essential information about software intent. Visualizing opcode data allows for simple data augmentation and texture analysis. The proposed approach utilizes a simple visual attention module to perform a binary classification task on program data, focusing on visualized application opcode data. The proposed model is tested on an ARM-based Internet of Things (IoT) application opcode dataset, and a comparative analysis using numerous metrics is conducted against several other algorithms. The results indicate that the proposed method outperformed all other tested techniques in accuracy, recall, precision, and F-score.
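A minimal sketch of the opcode visualization step, assuming the opcode byte stream is reshaped into a square grayscale image (the image side and the repeat-or-truncate fitting rule are illustrative); the attention-based classifier would then operate on these images:

```python
import numpy as np

def opcodes_to_image(opcode_bytes, side=64):
    """Turn a program's opcode byte stream into a square grayscale
    image for texture-based analysis."""
    buf = np.frombuffer(opcode_bytes, dtype=np.uint8)
    buf = np.resize(buf, side * side)   # repeat or truncate to fit
    return buf.reshape(side, side) / 255.0

img = opcodes_to_image(open("app.bin", "rb").read())  # hypothetical opcode dump
```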
With the ever-increasing adoption of interconnected technologies and the rapid digitization of modern life, many online networks and applications face constant threats to the security and integrity of their operations or services. For example, fraudsters and malicious entities continuously evolve their techniques to bypass measures in place to prevent financial fraud, vandalism in online knowledge bases and social networks like Wikipedia, and malicious cyber-attacks. As such, many of the supervised models proposed to detect these malicious actions degrade in detection performance and are rendered obsolete over time. Furthermore, fraudulent or anomalous data representing these attacks are often scarce or very difficult to access, which further restricts the performance of supervised models. Generative adversarial networks (GANs) are a relatively new class of generative models that rely on unsupervised learning and have proven effective at replicating the distributions of real data provided to them. These models can generate synthetic data whose resemblance to real data is almost indistinguishable, as demonstrated in image and video applications such as DeepFakes. Based on the success of GANs in applications involving image-based data, this study examines the performance of several GAN architectures as an oversampling technique to address the data imbalance in credit card fraud data. A comparative analysis is presented of different types of GANs used to fabricate training data for a classification model and of their impact on the performance of said classifier. Furthermore, we demonstrate that greater detection performance can be achieved using GANs as an oversampling approach in imbalanced data problems.
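A minimal sketch of the oversampling loop, assuming a Keras-style generator already trained on the minority (fraud) rows only; the helper name and target ratio are illustrative:

```python
import numpy as np

def oversample_with_gan(X_train, y_train, generator, latent_dim,
                        target_ratio=0.5):
    """Generate synthetic minority samples until the minority class
    makes up `target_ratio` of the training set."""
    n_major = int((y_train == 0).sum())
    n_minor = int((y_train == 1).sum())
    n_needed = int(target_ratio * n_major / (1 - target_ratio)) - n_minor
    noise = np.random.normal(size=(max(n_needed, 0), latent_dim))
    X_fake = generator.predict(noise)   # assumed Keras-style generator API
    X_aug = np.vstack([X_train, X_fake])
    y_aug = np.concatenate([y_train, np.ones(len(X_fake))])
    return X_aug, y_aug
```

The classifier is then trained on the augmented set and evaluated on untouched real data, so the synthetic rows influence only training.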
KEYWORDS: Clouds, Data storage, Data processing, Computer security, Network architectures, Distributed computing, Network security, Control systems, Computing systems
As data collected through IoT systems worldwide increases and the deployment of IoT architectures expands across multiple domains, novel frameworks that focus on application-based criteria and constraints are needed. In recent years, big data processing has been addressed using cloud-based technology, although such implementations are not suitable for latency-sensitive applications. Edge and fog computing paradigms have been proposed as a viable solution to this problem, extending computation and storage to data centers located at the network's edge and providing multiple advantages over purely cloud-based solutions. However, security and data integrity concerns arise when developing IoT architectures in such a framework, and blockchain-based access control and resource allocation are viable solutions in decentralized architectures. This paper proposes an architecture composed of a multilayered data system capable of redundant distributed storage and processing, using encrypted data transmission and logging on distributed internal peer-to-peer networks.
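A minimal sketch of the encrypted-logging idea using symmetric encryption (the `cryptography` package's Fernet); the peer API is a placeholder, not the paper's interface:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # shared within the internal P2P layer
cipher = Fernet(key)

def log_to_peers(record: bytes, peers):
    """Encrypt a sensor record before replicating it to peer nodes
    for redundant distributed storage."""
    token = cipher.encrypt(record)
    for peer in peers:
        peer.store(token)        # peer.store() is a placeholder API
```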
Medical image analysis continues to evolve at an unprecedented rate with the integration of contemporary computer systems. Image registration is fundamental to medical image analysis, yet traditional registration methods are extremely time-consuming and at times inaccurate. Novel techniques incorporating machine learning have proven fast, accurate, and reliable. However, supervised learning models are difficult to train due to the lack of ground truth data, so researchers have explored alternative avenues, including unsupervised learning. In this paper, we continue to explore the use of unsupervised learning for image registration across medical imaging. We postulate that a greater focus on channel-wise data can substantially improve model performance. To this end, we employ a sequence generation model, a squeeze-and-excitation network, a convolutional variant of long short-term memory (ConvLSTM), and a spatial transformer network in a channel-optimized image registration architecture. To test the proposed approach, we utilize a dataset of 2D brain scans and compare the results against a state-of-the-art baseline model.
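For reference, a standard squeeze-and-excitation block, which supplies the channel-wise attention the approach relies on; how it is wired together with the ConvLSTM and spatial transformer is the paper's contribution and is not shown here:

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Channel-wise attention: globally pool each channel, learn a
    per-channel weight, and rescale the feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                 # squeeze: global avg pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                           # excitation: rescale channels
```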
Malware is a term that refers to any malicious software used to harm or exploit a device, service, or network. The presence of malware in a system can disrupt operations and the availability of information in networks while also jeopardizing the integrity and confidentiality of that information, which poses a grave issue for sensitive and critical operations. Traditional approaches to malware detection, often used by antivirus software, are not robust in detecting previously unseen malware; as a result, they can often be circumvented by finding and exploiting vulnerabilities of the detection system. Given the recent advancements in natural language processing, this study applies NLP techniques to analyze the strings present in the executable files of malware. Specifically, we propose a topic modeling-based approach whereby the strings of a malware's executable file are treated as a language abstraction from which relevant topics are extracted and used to improve a classifier's detection performance. Finally, through experiments on a publicly available dataset, the proposed approach is demonstrated to be superior to traditional techniques in its detection ability, specifically in terms of performance measures such as precision and accuracy.
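A minimal sketch of the topic-feature extraction, assuming latent Dirichlet allocation as the topic model (the abstract says only "topic modeling"); the example strings and component count are illustrative:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each "document" is the concatenated printable strings of one executable.
docs = ["GetProcAddress LoadLibraryA kernel32.dll CreateRemoteThread",
        "connect http://203.0.113.7/payload cmd.exe /c del"]

vec = CountVectorizer(token_pattern=r"\S+")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=5, random_state=0)
topic_features = lda.fit_transform(X)   # per-file topic mixture
# topic_features can then be fed to any downstream classifier.
```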
Blockchain applications go far beyond cryptocurrency. As an essential blockchain tool, smart contracts are executable programs that establish an agreement between two parties. The millions of dollars transacted through smart contracts attract hackers at a hastened pace, and cyber-attacks have caused large economic losses in the past. Due to this, the industry is seeking robust and effective methods to detect vulnerabilities in smart contracts and ultimately provide a remedy. The industry has been utilizing static analysis tools to reveal security gaps, which requires insight into all possible execution paths to identify known contract vulnerabilities; yet the computational complexity increases as paths grow deeper. Recently, researchers have proposed ML-driven intelligent techniques aiming to improve efficiency and detection rates. Such solutions can provide quicker and more robust detection than traditionally used static analysis tools. As of this publication date, there is no published survey paper on smart contract vulnerability detection mechanisms using ML models. In order to set the ground for further development of ML-driven solutions, in this survey paper we extensively review and summarize a wide variety of ML-driven intelligent detection mechanisms from the following databases: Google Scholar, Engineering Village, Springer, Web of Science, Academic Search Premier, and Scholars Portal Journal. In conclusion, we provide our insights on the common traits, limitations, and advancements of ML-driven solutions proposed for this field.
Highly distributed connected systems, such as the Internet of Things (IoT), have made their way across numerous fields of application. IoT systems provide a method for connecting various heterogeneous devices across the internet, facilitating the efficient distribution, collection, and processing of system-related data. However, while system interconnectivity has aided communication and augmented the effectiveness of integrated technology, it has also increased system vulnerability. To this end, researchers have proposed various security protocols and frameworks for IoT ecosystems. Yet while many suggested approaches augment system security, centralization remains an area of concern within IoT systems. Therefore, we propose a decentralization scheme for IoT ecosystems based on blockchain technology. The proposed method is inspired by Helium, a public wireless long-range network powered by blockchain. Each network node is characterized by its device properties, which comprise local and network-level features. Communication in the network requires the testimony of companion nodes, ensuring that anomalous behaviour is not accepted and thereby preventing malicious attacks of various sorts.
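A minimal sketch of the testimony rule, with a hypothetical `vouches_for` API and a quorum size chosen purely for illustration:

```python
def accept_message(message, witnesses, quorum=2):
    """Accept a node's message only if enough companion nodes
    testify that the sender's observed behaviour is normal."""
    votes = sum(1 for w in witnesses if w.vouches_for(message.sender))
    return votes >= quorum
```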