This PDF file contains the front matter associated with SPIE Proceedings Volume 9464, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Situational Understanding and Anomaly Determination
Social media sources such as Twitter have proven to be a valuable medium for obtaining real-time information on breaking events, as well as a tool for campaigning. When tweeters can be characterised in terms of location (e.g., because they geotag their updates, or mention known places) or topic (e.g., because they refer to thematic terms in an ontology or lexicon) their posts can provide actionable information. Such information can be obtained in a passive mode, by collecting data from Twitter's APIs, but even greater value can be gained from an active mode of operation, by engaging with particular tweeters and asking for clarifications or amplifications. Doing so requires knowledge of individual tweeters as "sensing assets". In this paper we show how the use of social media as a kind of sensor can be accommodated within an existing framework for sensor-task matching, by extending existing ontologies of sensors and mission tasks, and accounting for variable information quality. An integrated approach allows tweeters to be "accessed" and "tasked" in the same way as physical sensors (unmanned aerial and ground systems) and, indeed, combined with these more traditional kinds of source. We illustrate the approach using a number of case studies, including field trials (obtaining eyewitness reports from the scene of organised protests) and synthetic experiments (crowdsourced situational awareness).
Context data, collected either from mobile devices or from user-generated social media content, can help identify abnormal behavioural patterns in public spaces (e.g., shopping malls, college campuses or downtown city areas). Spatiotemporal analysis of such data streams provides a compelling new approach towards automatically creating real-time urban situational awareness, especially about events that are unanticipated or that evolve very rapidly. In this work, we use real-life datasets collected via SMU's LiveLabs testbed or via SMU's Palanteer software, to explore various discriminative features (both spatial and temporal, e.g., occupancy volumes, rate of change in topic-specific tweets or probabilistic distribution of group sizes) for such anomaly detection. We show that such feature primitives fit into a future multi-layer sensor fusion framework that can provide valuable insights into mood and activities of crowds in public spaces.
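As a toy illustration of the temporal feature primitives described above, a deviation test over an occupancy stream might look like the following sketch. The counts, the threshold, and the function name are invented for illustration and are not drawn from the LiveLabs/Palanteer pipeline:

```python
from statistics import mean, stdev

def occupancy_anomalies(counts, threshold=2.0):
    """Flag time bins whose occupancy deviates strongly from the norm."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - mu) > threshold * sigma]

# Hourly occupancy for one zone; hour 5 has an unexpected surge.
counts = [40, 42, 38, 41, 39, 180, 40, 43]
print(occupancy_anomalies(counts))  # -> [5]
```

A real system would compute such primitives per location and per time window and feed them into the multi-layer fusion framework rather than thresholding a single stream.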
Surveillance cameras have become ubiquitous in society, used to monitor areas such as residential blocks, city streets, university campuses, industrial sites, and government installations. Surveillance footage, especially of public areas, is frequently streamed online in real time, providing a wealth of data for computer vision research. The focus of this work is on detection of anomalous patterns in surveillance video data recorded over a period of months to years. We propose an anomaly detection technique based on support vector data description (SVDD) to detect anomalous patterns in video footage of a university campus scene recorded over a period of months. SVDD is a kernel-based anomaly detection technique which models the normalcy data in a high-dimensional feature space using an optimal enclosing hypersphere; samples that lie outside this boundary are detected as outliers or anomalies. Two types of anomaly detection are conducted in this work: track-level analysis to determine individual tracks that are anomalous, and day-level analysis using aggregate scene-level feature maps to determine which days exhibit anomalous activity. Experimentation and evaluation are conducted using a scene from the Global Webcam Archive.
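SVDD itself solves a quadratic program in a kernel feature space; the following is only a crude input-space stand-in (a ball around the centroid of the normalcy data) meant to illustrate the enclosing-boundary idea. All data points are invented:

```python
import math

def fit_ball(normal, quantile=1.0):
    """Enclosing-ball sketch: centroid plus a radius covering the data.
    Real SVDD optimizes the center and allows slack via a QP."""
    d = len(normal[0])
    center = [sum(x[i] for x in normal) / len(normal) for i in range(d)]
    dists = sorted(math.dist(x, center) for x in normal)
    radius = dists[int(quantile * (len(dists) - 1))]
    return center, radius

def is_anomaly(x, center, radius):
    return math.dist(x, center) > radius

# Invented 2-D track features for "normal" activity.
normal = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.0)]
center, radius = fit_ball(normal)
print(is_anomaly((5.0, 5.0), center, radius))   # -> True
print(is_anomaly((1.0, 1.05), center, radius))  # -> False
```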
The upsurge in interest in "Internet of Things" technologies, solutions and products has brought with it a plethora of new tools to exploit them. Among these are some more visual tools that provide a "flow-based programming" [1] model. These would seem to be applicable to the in-field trials environment, where rapid integration and re-configuration of devices under test may be required, especially in a coalition environment where differing interpretations and implementations of draft standards may cause interconnection issues. By using flow-based tools, the integration of preconfigured "blocks" can lead to fast results, with higher accuracy than traditional programming techniques. We proposed and deployed one of these, Node-RED [2], at a UK MoD Land Open Systems Architecture (LOSA) field trial in October 2014. This paper describes how the use of the Node-RED visual wiring tool allowed very fast integration of networked assets during the field trial. These included individual sensors, gateways to soldier systems, and access to a NATO coalition partner's assets (vehicle, soldier and micro-UAV).
Coalition operations often involve the sharing of information and IT infrastructure amongst partners. Whilst there may be a coalition ‘need to share’ data, this is tempered by a ‘need to know’ principle that often prevents valuable information from being exchanged, particularly with classified data. Ideally, coalition partners would wish to share data that can be used to compute specific results that are only relevant to a given operation, without revealing all of the shared information. In this paper we present the concept of a secure coalition cloud architecture that is capable of storing encrypted data and of performing arbitrary computations on the encrypted data on behalf of users, without at any stage having to decrypt it. To do this we make use of a fully homomorphic encryption scheme, with a novel approach for managing encryption and decryption keys in a public key infrastructure (PKI) setting.
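The scheme in the paper is fully homomorphic, supporting arbitrary computation on ciphertexts. As a far simpler illustration of the core idea of computing on encrypted data without decrypting it, here is a toy additively homomorphic Paillier sketch; the primes are insecurely small and chosen purely for demonstration:

```python
import math, random

# Toy Paillier keypair (tiny primes for illustration only).
p, q = 47, 59
n = p * q                                     # public modulus
n2 = n * n
g = n + 1                                     # public generator
lam = math.lcm(p - 1, q - 1)                  # private
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # private

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so the cloud can sum coalition data it cannot read.
c = (enc(17) * enc(25)) % n2
print(dec(c))  # -> 42
```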
Interoperability and Networking: Architectures and Frameworks
This paper sets out the case that technical ISR (Intelligence, Surveillance and Reconnaissance) system interoperability is a subset of more general open and modular information system design; both address the same architectural issues associated with layers, templates, interface specification, profiles, data models, process control and assurance.
The paper develops a set of frameworks and models to enable those ISR specialists concerned with interoperability to engage with those concerned with open and modular information system infrastructures.
In the Army Intelligence domain, Processing, Exploitation, and Dissemination (PED) is the process used to collect information, convert it into actionable intelligence, and then distribute this intelligence appropriately to those who make decisions and execute the tasks and missions that the intelligence supports. In today’s Intelligence domain, information is gathered from an abundance of sources, and these sensors create enormous volumes of data output. PED is a time-sensitive process, which is also constrained by manpower and extremely limited tactical bandwidth. Currently, PED is primarily a higher-echelon activity, but as information gathering increases at the platforms it makes sense to automate PED tasks and execute them closer to the sensor sources. An architecture that allows sensor data to be processed more intelligently at various locations within the Intel network (on board a UAV or vehicle, at the COIST, and at higher echelons) can help to alleviate these constraints by positioning sensor fusion as close to the sources as possible to minimize bandwidth utilization. However, this architecture will implicitly need a way to share data to enable fusion. While any given mission may require fusion of just a few sensor data sources, which can be accomplished with point-to-point integration, this approach does not scale and is not maintainable: the range of all missions will require a combination of any number of data sources, and this approach will most likely require extra development to handle new sources. Therefore, there needs to be a way to share and reuse data that is extensible, maintainable, and not tied to any one mission type. This approach will reduce duplication, provide common patterns for accessing information, and support future growth.
This paper describes how a common ontology can be used to transform intelligence data from any number of disparate sources to a higher level of integration, where a logical understanding of the domain is used to share knowledge between sources. The paper discusses sensor ontology efforts to date, introduces the Common Core Ontologies, which provide the common upper- and mid-level semantics inherited by domain-level ontologies, and describes future experimentation. It discusses the role of the Common Core Ontologies' development and governance practices in producing a logically consistent data set, which can be accessed through a single API. By utilizing this approach, sensor outputs can be fused using inferencing, entity and event resolution, and other third-party analytic apps. Finally, the paper describes how ontologies are leveraged to enable tasking, analytics, rules-based reasoning, and distributed processing, the functional components currently being utilized or developed to support the PED process.
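A minimal sketch of the map-then-infer idea follows: source-specific records are lifted into shared triples, and one subclass-inference step propagates class membership up a stand-in hierarchy. The feed names, field names, and class names are invented; real Common Core Ontologies terms are IRIs, not these strings:

```python
# Hypothetical class hierarchy standing in for upper/mid-level semantics.
SUBCLASS = {"EOSensor": "Sensor", "AcousticSensor": "Sensor"}

def to_triples(source, record):
    """Map a source-specific record into common-ontology triples."""
    if source == "uav_feed":
        return [(record["id"], "a", "EOSensor"),
                (record["id"], "locatedAt", record["pos"])]
    if source == "ugs_feed":
        return [(record["node"], "a", "AcousticSensor"),
                (record["node"], "locatedAt", record["loc"])]
    raise ValueError(source)

def infer(triples):
    """One inference step: propagate class membership up the hierarchy."""
    out = set(triples)
    for s, p, o in triples:
        if p == "a" and o in SUBCLASS:
            out.add((s, "a", SUBCLASS[o]))
    return out

kb = infer(to_triples("uav_feed", {"id": "uav1", "pos": "grid42"})
           + to_triples("ugs_feed", {"node": "ugs7", "loc": "grid42"}))
sensors = {s for s, p, o in kb if p == "a" and o == "Sensor"}
print(sorted(sensors))  # -> ['uav1', 'ugs7']
```

Because both feeds land in one logically consistent store, a single query finds all sensors regardless of which stovepipe produced them; that is the integration benefit the paper attributes to a common ontology.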
The Sandia Architecture for Heterogeneous Unmanned System Control (SAHUC) was produced as part of a three-year internally funded project performed by Sandia’s Intelligent Systems, Robotics, and Cybernetics group (ISRC). ISRC created SAHUC to demonstrate how teams of Unmanned Systems (UMS) can be used for small-unit tactical operations incorporated into the protection of high-consequence sites. Advances in Unmanned Systems have provided crucial autonomy capabilities that can be leveraged and adapted to physical security applications. SAHUC applies these capabilities to provide a distributed ISR network for site security. This network can be rapidly re-tasked to respond to changing security conditions. The SAHUC architecture contains multiple levels of control. At the highest level, a human operator inputs objectives for the network to accomplish. The heterogeneous unmanned systems automatically decide which agents can perform which objectives and then decide the best global assignment. The assignment algorithm is based upon coarse metrics that can be produced quickly. Responsiveness was deemed more crucial than optimality for responding to time-critical physical security threats. Lower levels of control take the assigned objective, perform online path planning, execute the desired plan, and stream data (LIDAR, video, GPS) back for display on the user interface. SAHUC also retains an override capability, allowing the human operator to modify all autonomous decisions whenever necessary. SAHUC has been implemented and tested with UAVs, UGVs, and GPS-tagged blue/red force actors. The final demonstration illustrated how a small fleet, commanded by a remote human operator, could aid in securing a facility and responding to an intruder.
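The responsiveness-over-optimality trade described above can be illustrated with a greedy assignment over a coarse cost metric: cheapest agent-objective pairs are claimed first, with no backtracking. The agents, objectives, and cost values are invented; SAHUC's actual metrics and algorithm are not published here:

```python
def greedy_assign(cost):
    """cost[i][j]: coarse metric for agent i performing objective j.
    Greedy claims the cheapest remaining pair; fast but not optimal."""
    pairs = sorted((c, i, j) for i, row in enumerate(cost)
                   for j, c in enumerate(row))
    used_i, used_j, out = set(), set(), {}
    for c, i, j in pairs:
        if i not in used_i and j not in used_j:
            out[i] = j
            used_i.add(i)
            used_j.add(j)
    return out

# Rows: UAV, UGV; columns: 'overwatch', 'intercept' objectives.
cost = [[2.0, 9.0],
        [4.0, 3.0]]
print(greedy_assign(cost))  # -> {0: 0, 1: 1}
```

An optimal matching (e.g., the Hungarian algorithm) costs more computation; for time-critical security responses, a quickly produced good-enough assignment that the human operator can override is often the better engineering choice.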
As the modern information environment continues to expand with new technologies, military Command and Control (C2) has increasing access to unprecedented amounts of data and analytic resources to support military decision making. However, with the increasing quantity and heterogeneity of multi-INT data—from new collection platforms, new sensors, and new analytic tools—comes a growing information fusion challenge. For example, increasingly distributed processing, exploitation, and dissemination (PED) capabilities and analyst intelligence resources must identify and integrate the most relevant data sources to support and improve operational command and control and situation awareness without becoming overwhelmed by data and potentially missing critical information. We present an innovative new information fusion and organizational decision-making architecture—Dual Node Decision Wheels (DNDW)—that integrates multi-INT PED, information analysis, and C2 processes through a novel combination of goal-directed information fusion and data-driven decision making, helping alleviate “big data” challenges through more fluid coordination of organizations and technologies. DNDW applies the dual node network for fusion and resource management with semantic links between organizational processes and decision aids, ensuring that each organizational role has access to the right information. DNDW can map fusion onto any organizational structure and provide a cost-effective solution methodology for integrating new technologies.
Network Sensing and Processing for ISR Applications I
This paper briefly describes the set-up of the sensors and instrumentation deployed by the French-German Research Institute of Saint-Louis (ISL) during the last NATO/ACG3/SG2 HFI Threat Data Collection, a trial conducted in summer 2014 in the Czech Republic. Measurements of acoustic signals generated by small-caliber weapons, ammunition, and rockets were carried out for the development of Hostile Fire Indicator (HFI) systems. For ground bases, imaging systems were paired with acoustic sensors in order to provide complementary information and better permanent surveillance and sniper detection. Our basic approach is to combine several technologies developed at ISL: acoustic detection, fusion of distributed sensor data, active imaging, and 3D audio restitution of the threat.
Synchronization of Intelligence, Surveillance, and Reconnaissance (ISR) activities to maximize the utilization of limited resources (both in terms of quantity and capability) has become critically important to military forces. In centralized frameworks, a single node is responsible for determining and disseminating decisions (e.g., task assignments) to all nodes in the network. This requires a robust and reliable communication network. In decentralized frameworks, processing of information and decision making occur at different nodes in the network, reducing the communication requirements. This research studies the degradation of solution quality (i.e., potential information gain) as a centralized system synchronizing ISR activities moves to a decentralized framework. The mathematical programming model of previous work [1] has been extended for multi-perspective optimization, in which each collection asset develops its own decisions to support mission objectives based only on its perspective of the environment. Different communication strategies are considered. Collection assets are part of the same communication network (i.e., a connected component) if: (1) a fully connected network exists between the assets in the connected component, or (2) a path (consisting of one or more communication links) exists between every pair of assets in the connected component. Multiple connected components may exist among the available collection assets supporting a mission. Information is only exchanged when assets are part of the same network. The potential location of assets that are not part of a connected component can be considered (with a suitable decay factor as a function of time) as part of the optimization model.
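Determining which assets can exchange information is exactly the connected-components computation on the communication graph; a union-find sketch with invented asset names:

```python
def connected_components(assets, links):
    """Partition assets into communication networks given pairwise links."""
    parent = {a: a for a in assets}

    def find(a):
        # Walk to the root with path halving.
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for a, b in links:
        parent[find(a)] = find(b)   # union the two components

    comps = {}
    for a in assets:
        comps.setdefault(find(a), set()).add(a)
    return list(comps.values())

assets = ["uav1", "uav2", "ugv1", "ugv2"]
links = [("uav1", "uav2"), ("uav2", "ugv1")]   # ugv2 is out of contact
print(connected_components(assets, links))
```

Here "uav1", "uav2", and "ugv1" form one network (via the multi-hop path through "uav2"), while "ugv2" is its own component; under the model above, its potential location would enter the optimization only through a time-decayed estimate.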
Novel techniques are necessary to improve the current state of the art in Aided Target Recognition (AiTR), especially for persistent intelligence, surveillance, and reconnaissance (ISR). A fundamental assumption that current AiTR systems make is that operating conditions remain semi-consistent between the training samples and the testing samples. Today’s electro-optical AiTR systems are still not robust to common occurrences such as changes in lighting conditions. In this work, we explore the effect of systematic variation in lighting conditions on vehicle recognition performance. In addition, we explore the use of low-dimensional nonlinear representations of high-dimensional data, derived from electro-optical synthetic vehicle images using manifold learning (specifically, diffusion maps), for recognition. Diffusion maps have been shown to be a valuable tool for extracting the inherent underlying structure of high-dimensional data.
Network Sensing and Processing for ISR Applications II
In this paper, we address the problem of multiple ground target tracking and classification with data from an unattended wireless sensor network. A multiple target tracking algorithm, taking into account road and vegetation information, is studied in a centralized architecture. Despite the efficient algorithms proposed in the literature, we must adapt a basic approach to satisfy embedded processing constraints. The algorithm enables tracking of humans and vehicles moving both on and off road. Building on our previous work, we integrate road or trail width and vegetation cover into the motion model to improve tracking performance under constraint. Our algorithm also uses different dynamic models, including a stop motion model, to handle target maneuvers. In order to handle realistic ground target tracking scenarios, the tracking algorithm is integrated into an operational platform (named the fusion node), an autonomous smart computer left unattended in the surveillance area. After the calibration step of the heterogeneous sensor network, our system is able to handle real data from a wireless ground sensor network. The performance of the system is evaluated in a real exercise for Forward Operating Base (FOB) protection and road surveillance.
In this paper we present a technical solution that provides microcontroller-scale devices with compatibility to the open-standard Lean Services Architecture used in the UK MoD Land Open Systems Architecture. The paper describes how low powered microcontrollers can achieve interoperability by using the Lean Services on-the-wire binary format. We show how the use of the Micro Services Architecture by microcontroller devices increases the number of systems available for integration by a factor of 20, providing interoperability from the largest enterprise system down to tiny devices in the tactical environment using a single and consistent technique. The variations between the Lean Services Architecture and the Micro Services Architecture are described, and the rationale is explained for the decisions made in adapting to the very low computing power available in some microcontrollers. The described technique provides: a) service-orientated architecture interoperability for microcontroller-level devices; b) compatibility with the Lean Services Architecture; c) compatibility with LOSA, allowing microcontroller devices to interoperate with other LOSA systems both on a local area network and across tactical radio links; d) a roadmap for future enhancements; and e) software toolkits to allow manufacturers to integrate the Micro Services Architecture into their microcontroller-driven devices. The architecture re-uses existing Lean Services techniques and leverages the UK MoD Generic Soldier Architecture and LOSA.
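The actual Lean Services on-the-wire layout is defined by the architecture's specification, which is not reproduced here. The following is only a generic illustration of why fixed binary records suit microcontrollers: a handful of bytes, no parsing of text, and a layout both ends agree on. The field layout is invented:

```python
import struct

# Hypothetical fixed-layout record: message id, sensor type,
# signed reading, sequence number. '<' = little-endian, no padding.
FMT = "<HBhH"

def encode(msg_id, sensor_type, reading, seq):
    return struct.pack(FMT, msg_id, sensor_type, reading, seq)

def decode(buf):
    return struct.unpack(FMT, buf)

wire = encode(0x0101, 3, -40, 7)
print(len(wire))     # -> 7 bytes: feasible over a tactical radio link
print(decode(wire))  # -> (257, 3, -40, 7)
```

On the microcontroller side the same seven bytes map onto a packed C struct, which is what makes a single consistent technique workable from enterprise systems down to tiny devices.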
In this paper, we address the problem of decentralized visibility-based target tracking for a team of mobile observers trying to track a team of mobile targets. Building on the results of previous work, we address the case in which the graph that models the communication network within the team of observers is not complete. We propose a hierarchical approach. At the upper level, each observer is allocated to a target through a local minimum cost matching. At the lower level, each observer computes its navigation strategy based on the results of the single observer-single target problem, thereby decomposing a large multi-agent problem into several 2-agent problems. Finally, we evaluate the performance of the proposed strategy in simulations and experiments.
Sensor, Data and Information Processing and Fusion Algorithms
This work addresses the problem of localizing a mobile intruder on a road network with a small UAV through fusion of event-based 'hard data' collected from a network of unattended ground sensors (UGS) and 'soft data' provided by human dismount operators (HDOs) whose statistical characteristics may be unknown. Current approaches to road network intruder detection/tracking have two key limitations: predictions become computationally expensive with highly uncertain target motions and sparse data, and they cannot easily accommodate fusion with uncertain sensor models. This work shows that these issues can be addressed in a practical and theoretically sound way using hidden Markov models (HMMs) within a comprehensive Bayesian framework. A formal procedure is derived for automatically generating sparse Markov chain approximations for target state dynamics based on standard motion assumptions. This leads to efficient online implementation via fast sparse matrix operations for non-Gaussian localization aboard small UAV platforms, and also leads to useful statistical insights about stochastic target dynamics that could be exploited by autonomous UAV guidance and control laws. The computational efficiency of the HMM can be leveraged in Rao-Blackwellized sampling schemes to address the problem of simultaneously fusing and characterizing uncertain HDO soft sensor data via hierarchical Bayesian estimation. Simulation results are provided to demonstrate the proposed approach.
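The sparse-HMM recursion underlying such localization can be sketched as a single predict-update step over a dictionary-encoded transition map, where the sparsity (each road segment connects to few others) keeps the cost low. The road segments, transition probabilities, and UGS likelihood model below are all invented:

```python
def forward_step(belief, transitions, likelihood):
    """One HMM recursion: predict through a sparse transition map,
    then weight by the sensor likelihood and renormalize."""
    pred = {}
    for s, p in belief.items():
        for s2, t in transitions.get(s, {}).items():
            pred[s2] = pred.get(s2, 0.0) + p * t
    post = {s: p * likelihood.get(s, 0.0) for s, p in pred.items()}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

# Three road segments; the intruder moves forward or lingers.
transitions = {"A": {"A": 0.5, "B": 0.5},
               "B": {"B": 0.5, "C": 0.5},
               "C": {"C": 1.0}}
belief = {"A": 1.0}
ugs_hit_on_B = {"A": 0.1, "B": 0.8, "C": 0.1}   # assumed UGS event model
belief = forward_step(belief, transitions, ugs_hit_on_B)
print(max(belief, key=belief.get))  # -> 'B'
```

In the full approach this recursion is a sparse matrix-vector product, and soft HDO reports with unknown reliability enter through Rao-Blackwellized sampling rather than a fixed likelihood table.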
In vehicle target classification, contact sensors have frequently been used to collect data that simulates laser vibrometry data. Accelerometer data has been used in numerous studies to test and train classifiers in place of laser vibrometry data [1] [2]. Understanding the key similarities and differences between accelerometer and laser vibrometry data is essential to continue advancing aided vehicle recognition systems. This paper investigates the effect of accelerometer versus laser vibrometer data on classification performance. Research was performed using the end-to-end process previously published by the authors to understand the effects of different types of data on the classification results. The end-to-end process includes preprocessing the data, extracting features from various signal processing literature, using feature selection to determine the most relevant features, and finally classifying and identifying the vehicles. Three data sets were analyzed, including one collection on military vehicles and two recent collections on civilian vehicles. Experiments demonstrated include: (1) training the classifiers on accelerometer data and testing on laser vibrometer data, (2) combining the data and classifying the vehicles, and (3) repetitions of these tests with different vehicle states, such as idle or revving, and varying stationary revolutions per minute (rpm).
Imagery from unmanned aerial systems (UAS) needs compression prior to transmission to a receiver for further processing. Once received, automated image exploitation algorithms, such as frame-to-frame registration, target tracking, and target identification, are performed to extract actionable information from the data. Unfortunately, in a compress-then-analyze system, exploitation algorithms must contend with artifacts introduced by lossy compression and transmission. Identifying metrics that enable compression engines to predict exploitation degradation could give encoders the ability to tailor compression for specific exploitation algorithms. This study investigates the impact of H.264 and JPEG2000 compression on target tracking through the use of a multi-hypothesis blob tracker. The quality metrics used include PSNR, VIF, and IW-SSIM.
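Of the metrics listed, PSNR is the simplest to state: it is the peak signal power over the mean squared error between the reference and the compressed frame, in decibels. A minimal implementation over flattened 8-bit pixel arrays, with invented sample values:

```python
import math

def psnr(ref, test, peak=255):
    """Peak signal-to-noise ratio between two same-sized frames (dB)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

ref  = [52, 55, 61, 59, 79, 61, 76, 41]
test = [50, 55, 60, 59, 80, 60, 75, 41]   # mildly distorted copy
print(round(psnr(ref, test), 1))  # -> 48.1
```

VIF and IW-SSIM are substantially more involved (they model perceptual information content rather than raw pixel error), which is part of why comparing them against PSNR as predictors of tracker degradation is interesting.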
Unmanned aerial systems (UAS) equipped with electro-optic (EO) full motion video (FMV) sensors often need to transmit image sequences over a limited communications channel, requiring either intense compression, reduced frame rate, or reduced resolution to reach the receiver. In an attempt to improve rate-distortion performance of common video compression algorithms, such as H.264/AVC, several groups are developing compression methods to improve video quality at low bitrates. Concepts of these next generation methods, including H.265/HEVC, Google’s VP9, and Xiph.org’s Daala are examined in contrast to H.264/AVC, BBC’s Dirac, and Motion-JPEG2000 within the context of aerial surveillance. We present a compression performance analysis of these algorithms according to PSNR.
Current US Army issued binoculars lack the digital capabilities of today's electro-optic devices. By linking traditional optical binoculars with a smartphone, users can take advantage of the smartphone's digital camera. Live images viewed through the binocular can be captured as still images or recorded as video in real time. Additional smartphone capabilities can also be exploited, such as digital zoom on top of the binocular's optical magnification, GPS for geo-tagging, and wireless communication for transmission of recorded data. The linking of Commercial Off-The-Shelf (COTS) smartphones with optical binoculars has shown enormous potential, including a persistent ISR capability. The paper discusses the demonstration, results, and lessons learned of B-LINK-S applications.
Beamforming techniques are used to locate sources and scattering centers from data acquired by either passive or active phased arrays. The technique has a wide variety of applications, from far-field source location and tracking to near-field imaging. Inhomogeneities in the environment affect the propagation of the field, which in turn changes the results of a beamformer prediction. Using simulated sources and environments, one can systematically study the effect of the atmosphere on the angle of arrival as seen by the array. From these studies, attempts at systematic correction can be tested to evaluate their fidelity in a real system. We present the results of a series of studies of an acoustic field in the presence of sound-speed fluctuations and steady wind profiles, and demonstrate how various terms in the environment contribute to changes in beamformer processing results.
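The core angle-of-arrival idea can be sketched with a narrowband delay-and-sum beamformer on a uniform line array. This minimal model ignores the atmospheric effects the study is actually about; the half-wavelength element spacing, 8-element array, and synthetic plane wave are all illustrative assumptions:

```python
import cmath
import math

def doa_estimate(snapshot, d_over_lambda=0.5, angles_deg=range(-90, 91)):
    """Delay-and-sum angle-of-arrival estimate on a uniform line array.
    `snapshot` holds one complex narrowband sample per element; the look
    angle whose steered sum has the most power is returned (degrees)."""
    n = len(snapshot)
    best_angle, best_power = None, -1.0
    for a in angles_deg:
        theta = math.radians(a)
        # Steering vector: phase-align a plane wave from look angle theta.
        steer = [cmath.exp(-2j * math.pi * d_over_lambda * k * math.sin(theta))
                 for k in range(n)]
        power = abs(sum(s * x for s, x in zip(steer, snapshot))) ** 2
        if power > best_power:
            best_angle, best_power = a, power
    return best_angle

# Synthetic plane wave arriving from +20 degrees on an 8-element array.
true_theta = math.radians(20)
snap = [cmath.exp(2j * math.pi * 0.5 * k * math.sin(true_theta))
        for k in range(8)]
print(doa_estimate(snap))  # peak response at the true arrival angle
```

An atmospheric perturbation of the kind studied in the paper would distort the per-element phases in `snap`, shifting the beamformer's peak away from the true angle; that shift is precisely what a systematic correction would try to undo.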
Recently, Laser Doppler Vibrometry (LDV) has been widely employed to achieve long-range sensing in military applications, due to its high spatial and spectral resolutions in vibration measurements, which facilitate effective analysis using signal processing and machine learning techniques. Based on the collaboration of The City College of New York and the Air Force Research Laboratory over the last several years, we have developed a bank of algorithms to classify different types of vehicles, such as sedans, vans, pickups, motorcycles, and buses, and to identify various kinds of engines, such as Inline-4, V6, and 1- and 2-axle truck engines. Thanks to the similarities of LDV signals to acoustic and other time-series signals, a large body of existing approaches from the literature has been employed, such as speech coding, time-series representation, Fourier analysis, pyramid analysis, support vector machines, random forests, neural networks, and deep learning algorithms. We have found the classification results based on some of these methods to be extremely promising. For instance, our vehicle engine classification algorithm, based on pyramid Fourier analysis of the engine vibration and fundamental frequencies of vehicle surfaces over the data collected by our LDV in the summer of 2014, has consistently attained 96% precision. In laboratory studies or well-controlled environments, vehicle owners permit LDV measurements at a great array of high-quality points all over the vehicles, so extensive classifier training can be conducted to effectively capture the innate properties of surfaces in the spatial and spectral domains. However, in real contested environments, which are of utmost interest and practical importance to military applications, uncooperative vehicles are either fast moving or purposely concealed, and thus few high-quality LDV measurements can be made.
In this work, an intensive study compares vehicle-classification performance in cooperative and uncooperative environments via LDV measurements, based on a content-based indexing approach. The method uses an iterative Fourier analysis and an artificial feed-forward neural network. As our empirical studies suggest, even in uncooperative and contested environments, given an adequate training dataset of similar vehicles, our classification approach can still yield promising recognition rates.
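The Fourier-feature classification idea can be sketched minimally as follows; a nearest-centroid rule stands in for the authors' feed-forward neural network, and the sinusoidal "vibration" signals and engine labels are synthetic illustrations, not LDV data:

```python
import cmath
import math

def dft_mag(signal, n_bins=4):
    """Magnitudes of the first `n_bins` DFT coefficients - a crude
    stand-in for Fourier-based vibration features."""
    N = len(signal)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * t / N)
                    for t, x in enumerate(signal))) / N
            for k in range(1, n_bins + 1)]

def classify(signal, centroids):
    """Return the label whose feature centroid is nearest in
    squared Euclidean distance."""
    feats = dft_mag(signal)
    return min(centroids, key=lambda lab: sum(
        (f - c) ** 2 for f, c in zip(feats, centroids[lab])))

# Synthetic "engines" with distinct dominant vibration harmonics.
N = 64
make = lambda freq: [math.sin(2 * math.pi * freq * t / N) for t in range(N)]
centroids = {"Inline-4": dft_mag(make(2)), "V6": dft_mag(make(3))}
print(classify(make(3), centroids))  # matches the 3rd-harmonic class
```

In the cooperative setting of the paper, many measurement points per vehicle would supply dense training features; the uncooperative case corresponds to classifying from far fewer, noisier snapshots like the single signal above.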
One significant technological barrier to enabling multi-sensor integrated ISR is obtaining an accurate understanding of the uncertainty present from each sensor. Once the uncertainty is known, data fusion, cross-cueing, and other exploitation algorithms can be performed. However, these algorithms depend on the availability of accurate uncertainty information from each sensor.
In many traditional systems (e.g., a GPS/IMU-based navigation system), the uncertainty values for any estimate can be derived by carefully observing or characterizing the uncertainty of its inputs and then propagating that uncertainty through the estimation system.
In this paper, we demonstrate that image registration uncertainty, on the other hand, cannot be characterized in this fashion. Much of the uncertainty in the output of a registration algorithm is due not only to the sensors used to collect the data, but also to the data collected and the algorithms used. We present results of an analysis of feature-based image registration uncertainty, using Monte Carlo analysis to investigate the errors present in an image registration algorithm. We demonstrate that the classical method of propagating uncertainty from the inputs to the outputs yields significant underestimates of the true uncertainty on the output. We then describe two possible sources of additional error present in feature-based methods and demonstrate their importance.
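The gap between first-order and Monte Carlo propagation can be illustrated with a toy one-dimensional sketch. The quadratic residual function below is a hypothetical stand-in for a registration algorithm, chosen only to show how linearized propagation can badly underestimate the output spread; it does not reproduce the paper's experiments:

```python
import math
import random

def propagate_linear(f, x0, sigma, h=1e-5):
    """First-order (Jacobian) propagation: sigma_out ~ |f'(x0)| * sigma,
    with the derivative taken by central differences."""
    deriv = (f(x0 + h) - f(x0 - h)) / (2 * h)
    return abs(deriv) * sigma

def propagate_mc(f, x0, sigma, n=20000, seed=0):
    """Monte Carlo propagation: sample the noisy input and measure
    the standard deviation of the output directly."""
    rng = random.Random(seed)
    ys = [f(x0 + rng.gauss(0.0, sigma)) for _ in range(n)]
    mean = sum(ys) / n
    return math.sqrt(sum((y - mean) ** 2 for y in ys) / n)

# A strongly nonlinear output near x0 = 0: the linearization sees a
# zero slope and predicts zero output uncertainty.
f = lambda x: x ** 2
lin = propagate_linear(f, 0.0, 1.0)
mc = propagate_mc(f, 0.0, 1.0)
print(lin < mc)  # the linearized estimate underestimates the true spread
```

The same mechanism applies in higher dimensions: wherever the registration cost surface is strongly nonlinear in its inputs, Jacobian-based propagation reports less uncertainty than sampling actually reveals.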
This paper presents a comparison of YUV color video formats using the H.264 video compression standard. The goal of this paper is to determine the best color video format for rate-distortion quality. This is a point of interest due to wireless video transmission quality and bandwidth limitations. The results show that, for 1080p video, YUV 4:2:0 video with a chrominance quantization parameter offset of zero is among the best-performing color video formats in terms of rate-distortion quality across three different levels of compression and two different entropy encodings.
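The storage argument behind 4:2:0 chroma subsampling can be sketched as follows; the BT.601 full-range conversion coefficients are standard, while the 1920x1080 frame dimensions simply match the 1080p case discussed:

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB -> Y'CbCr conversion for one pixel."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def yuv420_samples(width, height):
    """Raw 4:2:0 frame size in samples: full-resolution luma plus one
    Cb and one Cr sample per 2x2 pixel block."""
    luma = width * height
    chroma = 2 * (width // 2) * (height // 2)
    return luma + chroma

print(yuv420_samples(1920, 1080))                   # samples per 1080p frame
print(yuv420_samples(1920, 1080) / (1920 * 1080))   # 1.5 samples per pixel
```

At 1.5 samples per pixel instead of 3, 4:2:0 halves the raw data entering the encoder, which is why it dominates bandwidth-constrained wireless transmission.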