This PDF file contains the front matter associated with SPIE Proceedings Volume 9478 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
This paper utilizes a synchronized Lorenz chaotic drive/response system, which uses Haar filtering and appropriate thresholding to detect a transmitted random binary message. The Lorenz chaotic attractor obscures the message, and the transmission is passed through an additive white Gaussian noise (AWGN) channel before the original random binary data are successfully retrieved. The detection mechanism employs the Haar wavelet transform to combat the channel noise. A communication technique using chaotic parameter modulation (CPM) is simulated in MATLAB and prototyped on a reconfigurable hardware platform from Xilinx.
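As an illustrative sketch (not the authors' MATLAB or FPGA implementation), the drive/response synchronization underpinning such a scheme can be shown with an x-driven Pecora-Carroll Lorenz pair, integrated here with a simple Euler step and made-up initial conditions:

```python
# Illustrative Pecora-Carroll synchronization of a Lorenz drive/response pair.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
DT, STEPS = 0.001, 50000

x, y, z = 1.0, 1.0, 1.0   # drive state
yr, zr = -5.0, 20.0       # response state (deliberately mismatched)

for _ in range(STEPS):
    # Drive system (full Lorenz attractor); x is the transmitted signal.
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    # Response system driven by the received x signal.
    dyr = x * (RHO - zr) - yr
    dzr = x * yr - BETA * zr
    x, y, z = x + DT * dx, y + DT * dy, z + DT * dz
    yr, zr = yr + DT * dyr, zr + DT * dzr

# After transients die out, the response has locked onto the drive.
sync_err = max(abs(y - yr), abs(z - zr))
print(sync_err)
```

In a CPM-style scheme, roughly speaking, the message perturbs a drive parameter, and the resulting transient synchronization error at the receiver is what the detector thresholds.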
Android's application market relies on secure certificate generation to establish trust between applications and their users; yet, cryptography is often not a priority for application developers, and many fail to take the necessary security precautions. Indeed, there is cause for concern: several recent high-profile studies have observed a pervasive lack of entropy on Web systems leading to the factorization of private keys.1 Sufficient entropy, or randomness, is essential to generate secure key pairs and combat predictable key generation. In this paper, we analyze the security of Android certificates. We investigate the entropy present in 550,000 Android application certificates using the quasilinear GCD finding algorithm.1 Our results show that while the lack of entropy does not appear to be as ubiquitous in the mobile markets as on Web systems, there is substantial reuse of certificates: only one third of the certificates in our dataset were unique. In other words, we find that organizations frequently reuse certificates for different applications. While such a practice is acceptable under Google's specifications for a single developer, we find that in some cases the same certificates are used by a myriad of developers, potentially compromising Android's intended trust relationships. Further, we observed duplicate certificates being used by both malicious and non-malicious applications. The three most repeated certificates in our dataset accounted for a total of 11,438 separate APKs. Of these applications, 451, or roughly 4%, were identified as malicious by antivirus services.
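The GCD attack referenced here is quasilinear in its batch form; a naive pairwise sketch on toy moduli (small hypothetical primes chosen purely for illustration) conveys the underlying idea that two RSA moduli sharing a prime factor are both broken:

```python
# Toy illustration of factoring RSA moduli that share a prime via pairwise GCD.
# Real attacks use a quasilinear batch-GCD algorithm over huge key sets.
from math import gcd

p, q1, q2, r1, r2 = 10007, 10009, 10037, 10039, 10061
moduli = [p * q1, p * q2, r1 * r2]   # first two moduli share the prime p

# Any pair with gcd > 1 exposes a private factor of both keys.
shared = [(i, j, gcd(moduli[i], moduli[j]))
          for i in range(len(moduli))
          for j in range(i + 1, len(moduli))
          if gcd(moduli[i], moduli[j]) > 1]
print(shared)   # -> [(0, 1, 10007)]
```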
Because they are passive and have proliferated to many regions of the world, infrared (IR) guided missiles probably constitute the most dangerous threat to aircraft platforms. Early-generation surface-to-air and air-to-air IR-guided missiles use reticle-based seekers. One of the IR countermeasure (IRCM) techniques for protecting aircraft platforms against this type of threat is to use a modulated jamming signal. Optimizing the parameters of the modulation is the most important issue for effective protection. If the required characteristic is not satisfied, jamming may not succeed in protecting the aircraft. Several parameters define the jammer signal (modulation) characteristic, and optimizing them requires a good understanding of threat seekers' operating principles. In the present paper, we consider protection of a helicopter platform against conical-scan reticle-based seeker systems and investigate the effect of the jammer signal modulation parameters on jamming performance via extensive batch simulations. The simulations are performed in a MATLAB-coded simulator which models the reticle-based conical-scan seeker, aircraft radiation, aircraft motion, and the jammer system on the aircraft. The results show that if the properties of the jammer signal are similar to those of the reticle-modulated signal in the missile, the jamming can be successful. Otherwise, the applied jamming may not deceive the threat seeker.
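A heavily simplified sketch of this effect (hypothetical frequencies and jammer-to-signal ratio, not the MATLAB simulator described here): a conical-scan seeker demodulates the phase of the spin-frequency component of the received signal, and a pulsed jammer only pulls that phase off target when its modulation is close to the reticle's:

```python
# Sketch: a pulsed jammer deflects the seeker's demodulated phase only when
# its modulation frequency matches the reticle spin rate (toy numbers).
import math

F_SPIN = 100.0              # conical-scan reticle spin frequency, Hz
N, DT = 2000, 1.0 / 20000   # 0.1 s of samples at 20 kHz

def tracking_phase(jam_freq, js_ratio):
    """Phase of the spin-frequency component the seeker demodulates
    (0 rad corresponds to the true, unjammed target direction)."""
    re = im = 0.0
    for k in range(N):
        t = k * DT
        target = math.sin(2 * math.pi * F_SPIN * t)   # reticle-modulated target
        # Pulsed jammer: on whenever its own modulation exceeds a threshold.
        jam = js_ratio if math.cos(2 * math.pi * jam_freq * t) > 0.5 else 0.0
        s = target + jam
        re += s * math.cos(2 * math.pi * F_SPIN * t)
        im += s * math.sin(2 * math.pi * F_SPIN * t)
    return math.atan2(re, im)

mismatched = abs(tracking_phase(317.0, 10.0))  # modulation unlike the reticle's
matched = abs(tracking_phase(100.0, 10.0))     # modulation at the spin rate
print(mismatched, matched)  # only the matched jammer pulls the phase off target
```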
The OpenCL API provides an abstract mechanism for massively parallel programming on a very wide range of hardware, including traditional CPUs, GPUs, accelerator devices, FPGAs, and more. However, these architectures and platforms function quite differently, so coding OpenCL applications that are usefully portable is challenging. Certain considerations are therefore required to develop an effectively portable OpenCL library that enables parallel application development without fully separate code paths for each target platform.
Device detection and characterization through the OpenCL API provide valuable information for runtime optimization decisions. In particular, the effects of memory affinity change depending on the memory organization of the device architecture, while work partitioning and assignment depend on the device execution model: the types of parallel execution supported and the available synchronization primitives.
These considerations, in turn, affect the selection and invocation of kernel code. For certain devices, platform-specific
libraries are available, while others can benefit from generated kernel code based on the specified device parameters. By
parameterizing an algorithm based on how these considerations affect performance, a combination of device parameters
can be used to produce an execution strategy that will provide improved performance for that device or collection of
devices.
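A sketch of such runtime decisions is shown below. The field names are illustrative assumptions; in an actual OpenCL library they would come from `clGetDeviceInfo` queries such as `CL_DEVICE_TYPE`, `CL_DEVICE_LOCAL_MEM_SIZE`, and `CL_DEVICE_MAX_WORK_GROUP_SIZE`:

```python
# Hypothetical runtime strategy selection from queried device parameters.
def choose_strategy(dev):
    strat = {}
    # Memory affinity: discrete GPUs pay for host<->device copies, so stage
    # data in device buffers; unified-memory CPUs can use zero-copy host memory.
    strat["use_host_ptr"] = dev["type"] == "CPU" and dev["unified_memory"]
    # Work partitioning: round the work-group size down to a multiple of the
    # preferred vector/wavefront width, capped by the device maximum.
    wg = min(dev["max_work_group_size"], 256)
    strat["work_group_size"] = wg - wg % dev["preferred_multiple"]
    # Kernel selection: fall back to generated generic kernels when no tuned
    # platform-specific library is available.
    strat["kernel"] = "vendor_lib" if dev["has_vendor_lib"] else "generated"
    return strat

gpu = {"type": "GPU", "unified_memory": False, "max_work_group_size": 1024,
       "preferred_multiple": 64, "has_vendor_lib": True}
cpu = {"type": "CPU", "unified_memory": True, "max_work_group_size": 8192,
       "preferred_multiple": 8, "has_vendor_lib": False}
gpu_strat = choose_strategy(gpu)   # device buffers, vendor kernels
cpu_strat = choose_strategy(cpu)   # zero-copy host memory, generated kernels
print(gpu_strat, cpu_strat)
```

Parameterizing the algorithm this way lets one code path produce a per-device execution strategy instead of maintaining separate implementations per platform.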
The testing and evaluation process developed by the Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) provides end-to-end systems evaluation, testing, and training of EO/IR sensors. By combining NV-LabCap, the Night Vision Integrated Performance Model (NV-IPM), One Semi-Automated Forces (OneSAF) input sensor file generation, and the Night Vision Image Generator (NVIG) capabilities, NVESD provides confidence to the M&S community that EO/IR sensor developmental and operational testing and evaluation are accurately represented throughout the lifecycle of an EO/IR system. This new process allows for both theoretical and actual sensor testing. A sensor can be theoretically designed and modeled in NV-IPM, then seamlessly input into wargames for operational analysis. After theoretical design, prototype sensors can be measured using NV-LabCap, then modeled in NV-IPM and input into wargames for further evaluation. This measurement-to-high-fidelity-modeling process can be repeated throughout the entire life cycle of an EO/IR sensor as needed, including low-rate initial production (LRIP), full-rate production, and even after depot-level maintenance. This is a prototypical example of how an engineering-level model and higher-level simulations can share models to mutual benefit.
In this paper, we describe the use of various methods of one-dimensional spectral compression by variable selection, as well as principal component analysis (PCA), for compressing multi-dimensional sets of spectral data. We have examined methods of variable selection such as wavelength spacing, spectral derivatives, and spectral integration error. After variable selection, reduced transmission spectra must be decompressed for use. Here we examine various methods of interpolation, e.g., linear, cubic spline, and piecewise cubic Hermite interpolating polynomial (PCHIP), to recover the spectra prior to estimating at-sensor radiance. Finally, we compressed multi-dimensional sets of spectral transmittance data from moderate resolution atmospheric transmission (MODTRAN) data using PCA. PCA seeks to find a set of basis spectra (vectors) that model the variance of a data matrix in a linear additive sense. Although MODTRAN data are intricate and are used in nonlinear modeling, their base spectra can be reasonably modeled using PCA, yielding excellent results in terms of spectral reconstruction and estimation of at-sensor radiance. The major finding of this work is that PCA can be implemented to compress MODTRAN data with great effect, reducing file size, access time, and computational burden while producing high-quality transmission spectra for a given set of input conditions.
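A minimal sketch of the PCA compression-and-reconstruction step via SVD, run on synthetic Gaussian-absorption spectra (illustrative data, not MODTRAN output):

```python
# PCA compression of transmission spectra via SVD on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(0.4, 2.5, 300)                 # wavelength grid, micrometers

# Build 200 synthetic spectra from 3 absorption bands with random depths,
# so the true dimensionality of the set is low.
bands = [np.exp(-((wl - c) / 0.05) ** 2) for c in (0.9, 1.4, 1.9)]
depths = rng.uniform(0.1, 0.9, size=(200, 3))
spectra = 1.0 - depths @ np.array(bands)        # shape (200, 300)

mean = spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
k = 3                                           # keep 3 principal components
scores = (spectra - mean) @ Vt[:k].T            # compressed: 3 numbers/spectrum
recon = mean + scores @ Vt[:k]                  # decompressed spectra

err = np.abs(recon - spectra).max()
ratio = spectra.size / (scores.size + Vt[:k].size + mean.size)
print(err, ratio)   # near-zero error at roughly 30x smaller storage
```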
Today's battlefield environment is increasingly versatile, as unconventional warfare such as terrorist attacks and cyber-attacks has noticeably increased lately. The damage caused by such unconventional warfare can be serious, particularly when the targets are critical infrastructures supporting banking and finance, transportation, power, information and communication, government, and so on. Critical infrastructures are usually interconnected and are thus very vulnerable to attack. Ensuring their security is therefore very important, and the concept of critical infrastructure protection (CIP) has emerged. National-level CIP programs have taken the form of statutes in many countries. At the same time, each individual critical infrastructure also needs protection. The objective of this paper is to study such an effort, which can be called a CIP system (CIPS). There could be a variety of ways to design CIPSs. Instead of considering the design of each individual CIPS, a reference-model-based approach is taken in this paper. The reference model represents the design of all CIPSs that have many design elements in common. The development of the reference model is carried out using a variety of model diagrams. The modeling language used is the Systems Modeling Language (SysML), a de facto standard developed and managed by the Object Management Group (OMG). Using SysML, the structure and operational concept of the reference model are designed to fulfil the goals of CIPSs, resulting in block definition and activity diagrams. As a case study, the operational scenario of a nuclear power plant under terrorist attack is studied using the reference model. The effectiveness of the results is also analyzed using multiple analysis models.
It is thus expected that the approach taken here has some merit over the traditional design methodology of repeating requirements analysis and system design.
Organizational transformation to a model-based engineering culture is a significant goal for Northrop Grumman Electronic Systems in pursuit of increased engineering performance. While organizational change is difficult, a focus on connections is creating success. Connections include model to model, program phase to program phase, and organization to organization, all achieved through model-based techniques. This presentation will address the techniques employed by Northrop Grumman to achieve these results, as well as continued focus and efforts. Model-to-model connections are very effective in automating implicit linkages between models, both to ensure consistency across a set of models and to rapidly assess the impact of change. Program phase-to-phase connections are very important for reducing development time as well as reducing potential errors in moving from one program phase to another. Organization-to-organization communication is greatly facilitated using model-based techniques to eliminate ambiguity and drive consistency and reuse.
Target detection and tracking is an important element of military operations. Military commanders and personnel continue to allocate resources to assessing platform performance in target detection, tracking, identification, and classification, to support decisions about assigning military assets to surveillance missions. We present an application of target detection and tracking by military aircraft in a maritime surveillance mission. We modelled the platforms' capabilities and the targets' motion and performed simulations of the scenario. The performance of the military aircraft is quantified by the time taken to complete the mission, the percentage of targets detected, and the extent of surveillance completed. Results show that the joint operation of the UAVs and the helicopter gives better detection results than the helicopter operating alone.
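A toy Monte Carlo sketch of the joint-platform benefit (hypothetical sensor footprints, look counts, and target numbers, not the paper's platform models):

```python
# Toy Monte Carlo: fraction of targets detected by a helicopter alone vs.
# jointly with two UAVs, using circular sensor footprints (made-up numbers).
import random

AREA = 100.0                       # square surveillance area, km per side

def run_mission(platform_radii, n_targets=200, n_looks=50):
    """Fraction of randomly placed targets seen by any platform's looks."""
    random.seed(1)                 # shared seed: runs share targets and the
                                   # first platform's look positions
    targets = [(random.uniform(0, AREA), random.uniform(0, AREA))
               for _ in range(n_targets)]
    detected = set()
    for radius in platform_radii:          # one entry per platform
        for _ in range(n_looks):           # random search-pattern snapshots
            cx, cy = random.uniform(0, AREA), random.uniform(0, AREA)
            for idx, (tx, ty) in enumerate(targets):
                if (tx - cx) ** 2 + (ty - cy) ** 2 <= radius ** 2:
                    detected.add(idx)
    return len(detected) / n_targets

helo_only = run_mission([12.0])            # helicopter sensor footprint only
joint = run_mission([12.0, 8.0, 8.0])      # helicopter plus two smaller UAVs
print(helo_only, joint)                    # joint operation detects more
```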
A general nonparametric technique is proposed for analyzing multi-resolution, multivariate feature spaces to isolate faulty sensors. The basic overlap function of the technique is the existing one-dimensional fault-detection Brooks-Iyengar algorithm, which uses weighted precision and accuracy for static data. We prove that the dual of the existing overlap function can isolate the measurement intervals in the multi-dimensional feature space for both labelled and unlabelled publicly available datasets. The computational complexity of learning the feature space is shown to increase linearly with the size of the input. Experimental results, reported as the mean average precision over all sensors using an ensemble model for dynamic events, show that the proposed algorithm performs well in the presence of noise across many static and dynamic action-recognition datasets.
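A one-dimensional sketch of the interval-overlap fusion idea (simplified from the full Brooks-Iyengar precision/accuracy weighting): regions where at least n - f sensor intervals agree contribute to the fused estimate, so a faulty sensor's outlying interval is ignored.

```python
# Minimal sketch of interval-overlap fusion tolerating up to `faulty` sensors.
def fuse(intervals, faulty):
    n = len(intervals)
    # Sweep over interval endpoints, tracking how many intervals cover
    # each region between consecutive endpoints.
    events = sorted([(lo, 1) for lo, hi in intervals] +
                    [(hi, -1) for lo, hi in intervals])
    depth, prev, num, den = 0, None, 0.0, 0.0
    for point, delta in events:
        # Regions covered by at least n - faulty intervals contribute their
        # midpoint, weighted by how many sensors agree there.
        if prev is not None and depth >= n - faulty and point > prev:
            mid = (prev + point) / 2
            num += depth * mid
            den += depth
        depth += delta
        prev = point
    return num / den

readings = [(4.7, 5.0), (4.8, 5.1), (4.9, 5.2), (9.0, 9.5)]  # last one faulty
fused = fuse(readings, faulty=1)
print(fused)   # -> 4.95, unaffected by the faulty (9.0, 9.5) interval
```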
A low-cost 3D scanner has been developed with a parts cost of approximately USD 5,000. This scanner uses visible-light sensing to capture both structural and texture/color data of a subject. This paper discusses the use of this type of scanner to create 3D models for incorporation into a virtual reality environment. It describes the basic scanning process, which takes under a minute for a single scan and can be repeated to collect multiple positions if needed for actor model creation. The efficacy of visible light versus other scanner types is also discussed.
Human activity recognition research relies heavily on extensive datasets to verify and validate performance of activity
recognition algorithms. However, obtaining real datasets is expensive and highly time-consuming. A physics-based
virtual simulation can accelerate the development of context-based human activity recognition algorithms and techniques
by generating relevant training and testing videos simulating diverse operational scenarios. In this paper, we discuss in
detail the requisite capabilities of a virtual environment to aid as a test bed for evaluating and enhancing activity
recognition algorithms. To demonstrate the numerous advantages of virtual environment development, a newly
developed virtual environment simulation modeling (VESM) environment is presented here to generate calibrated multisource
imagery datasets suitable for development and testing of recognition algorithms for context-based human
activities. The VESM environment serves as a versatile test bed to generate a vast amount of realistic data for training
and testing of sensor processing algorithms. To demonstrate the effectiveness of the VESM environment, we present
various simulated scenarios and processed results to infer proper semantic annotations from the high fidelity imagery
data for human-vehicle activity recognition under different operational contexts.
Dynamic Spectrum Access (DSA) is widely seen as a solution to the problem of limited spectrum, because of its ability
to adapt the operating frequency of a radio. Mobile Ad Hoc Networks (MANETs) can extend high-capacity mobile
communications over large areas where fixed and tethered-mobile systems are not available. In one use case with high
potential impact, cognitive radio employs spectrum sensing to facilitate the identification of allocated frequencies not
currently accessed by their primary users. Primary users own the rights to radiate at a specific frequency and geographic
location, while secondary users opportunistically attempt to radiate at a specific frequency when the primary user is not
using it. We populate a spatial radio environment map (REM) database with known information that can be leveraged in
an ad hoc network to facilitate fair path use of the DSA-discovered links. Utilization of high-resolution geospatial data
layers in RF propagation analysis is directly applicable. Random matrix theory (RMT) is useful in simulating network
layer usage in nodes by a Wishart adjacency matrix. We use the Dijkstra algorithm for discovering ad hoc network node
connection patterns. We present a method for analysts to dynamically allocate node-node path and link resources using
fair division. User allocation of limited resources as a function of time must be dynamic and based on system fairness
policies. In this context, fair means that the first request for an available asset is not envied so long as the asset is
not yet allocated or tasked, which prevents cycling of the system. This solution may also save money by offering a
Pareto-efficient, repeatable process. We use a water-filling queue algorithm that incorporates Shapley-value marginal contributions for allocation.
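The Dijkstra step for discovering node connection patterns can be sketched on a small adjacency matrix (illustrative toy topology, not the paper's REM-derived link data):

```python
# Dijkstra shortest-path discovery over an ad hoc network adjacency matrix.
import heapq

INF = float("inf")
# adj[i][j] = link cost between nodes i and j (INF = no discovered link)
adj = [[0,   2,   INF, 1,   INF],
       [2,   0,   3,   INF, INF],
       [INF, 3,   0,   1,   5],
       [1,   INF, 1,   0,   8],
       [INF, INF, 5,   8,   0]]

def dijkstra(adj, src):
    n = len(adj)
    dist = [INF] * n
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                    # stale queue entry, already relaxed
        for v in range(n):
            if adj[u][v] != INF and d + adj[u][v] < dist[v]:
                dist[v] = d + adj[u][v]
                heapq.heappush(heap, (dist[v], v))
    return dist

dist = dijkstra(adj, 0)
print(dist)   # -> [0, 2, 2, 1, 7]: cheapest path costs from node 0
```

In the scheme described above, the resulting path costs would then feed the fair-division allocation of link resources among competing users.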
This paper focuses on building an ad hoc GSM network based on open-source software and low-cost hardware. The created base transceiver station (BTS) can be deployed and put into operation in a few minutes in a required area to ensure private communication between connected GSM mobile terminals. Convergence between the BTS and other networks is possible through an IP network. The paper seeks to define connection parameters that provide sufficient voice quality between the GSM network and an IP Multimedia Subsystem (IMS). It presents practical results of voice-call quality measurements between users inside the BTS mobile network and users inside the IMS network. The calls are simulated by a low-cost embedded solution for speech quality measurement in GSM networks; this tool, under development in our laboratory, allows automatic speech quality measurement of any GSM or UMTS mobile network. The Perceptual Evaluation of Speech Quality (PESQ) method is used to obtain comparable final results. Communication between the BTS and connected networks has to be secured against interception by third parties, and the influence of the securing method on quality of service is presented in detail. Apart from the quality-of-service measurements, the paper describes the technical requirements for successful interconnection between the BTS and IMS networks, including the authentication, authorization, and accounting methods used in roaming between them.