This PDF file contains the front matter associated with SPIE Proceedings Volume 7347, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Making the best use of multi-point observations and sensor information to forecast future events in complex real-time systems is a challenge which presents itself in many military and industrial problem domains. The first step in tackling
these challenges is to analyze and understand the data. Depending on the algorithm used to forecast a future event,
improvements to a prediction can be realized if one can first determine the nature and extent of variable correlations, and
for the purposes of prediction, quantify the strength of the correlations of input variables to output variables. This is no
easy task since sensor readings and operator logs are sometimes inconsistent and/or unreliable, some catastrophic
failures can be almost impossible to predict, and time lags and leads in a given system may vary from one day to the
next. Correlation analysis techniques can help us deal with some of these problems. They allow us to identify which variables may be strongly correlated with major events. After detecting where the strongest correlations exist, one must
choose a model which can best predict the possible outcomes that could occur for a number of possible scenarios. The
model must be tested and evaluated, and sometimes it is necessary to go back to the feature selection stage of the model
design process and reevaluate the available sensory data and inputs. An industrial process example is adopted in this
research to both highlight the issues that arise in complex systems and to demonstrate methods of addressing such issues.
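As a minimal illustration of this kind of correlation screening (not the specific method used in the paper), the following Python sketch computes lagged Pearson correlations between candidate input sensors and an output variable; the sensor names, lag range, and synthetic data are hypothetical.

```python
# Hypothetical sketch: screen sensor inputs for lagged correlation with an output.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
sensors = {
    "temp": rng.normal(size=n),
    "pressure": rng.normal(size=n),
    "flow": rng.normal(size=n),
}
# Synthetic output that lags the "flow" sensor by 5 samples.
output = np.roll(sensors["flow"], 5) + 0.3 * rng.normal(size=n)

def lagged_corr(x, y, max_lag=20):
    """Return (best_lag, r) maximizing |Pearson r| of x shifted by lag against y."""
    best = (0, 0.0)
    for lag in range(max_lag + 1):
        r = np.corrcoef(x[: n - lag], y[lag:])[0, 1]
        if abs(r) > abs(best[1]):
            best = (lag, r)
    return best

for name, series in sensors.items():
    lag, r = lagged_corr(series, output)
    print(f"{name}: best lag = {lag}, r = {r:+.2f}")
```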
In evolutionary learning, the sine qua non is evolvability, which requires heritability of fitness and a balance between
exploitation and exploration. Unfortunately, commonly used fitness measures, such as root mean squared error (RMSE),
often fail to reward individuals whose presence in the population is needed to explain important data variance; and
indicators of diversity generally are not only incommensurate with those of fitness but also essentially arbitrary. Thus, due to poor scaling, deception, and related effects, individuals that appear relatively fit in early generations may not contain the building blocks needed to evolve optimal solutions in later generations. To reward individuals for their potential
incremental contributions to the solution of the overall problem, heritable information theoretic functionals are
developed that incorporate diversity considerations into fitness, explicitly identifying building blocks suitable for
recombination (e.g. for non-random mating). Algorithms for estimating these functionals from either discrete or
continuous data are illustrated by application to input selection in a high dimensional industrial process control data set.
Multiobjective information theoretic ensemble selection is shown to avoid some known feature selection pitfalls.
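A minimal sketch of information-theoretic input selection in this spirit (not the authors' heritable functionals): rank candidate inputs by their estimated mutual information with the output, here using scikit-learn's estimator for continuous data; the variables below are synthetic stand-ins.

```python
# Illustrative sketch: rank candidate inputs by estimated mutual information
# with the output (continuous data). Not the heritable functionals of the paper.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 5))                      # five candidate process inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 2] ** 2 + 0.1 * rng.normal(size=n)

mi = mutual_info_regression(X, y, random_state=1)
for idx in np.argsort(mi)[::-1]:
    print(f"input {idx}: estimated MI = {mi[idx]:.3f} nats")
```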
Throughout the animal kingdom there are many existing sensory systems with capabilities desired by the human designers of new sensory and computational systems. A few basic design principles are consistently observed among these natural mechano-, chemo-, and photo-sensory systems, principles that have stood the test of time. Such principles include non-uniform sampling and processing, topological computing, contrast enhancement by localized signal inhibition, graded localized signal processing, spiked signal transmission, and coarse coding, which is the computational transformation of raw data using broadly overlapping filters. These principles are outlined here with references to natural biological sensory systems as well as successful biomimetic sensory systems exploiting these natural design concepts.
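For example, coarse coding can be illustrated with a bank of broadly overlapping Gaussian receptive fields that jointly encode a scalar stimulus; the field centers, widths, and decoding rule below are arbitrary choices for illustration.

```python
# Illustration of coarse coding: a scalar stimulus is represented by the graded
# responses of broadly overlapping Gaussian receptive fields.
import numpy as np

centers = np.linspace(0.0, 10.0, 8)   # receptive-field centers (arbitrary)
width = 2.5                           # broad tuning -> heavy overlap

def coarse_code(stimulus):
    """Return the population response vector for a scalar stimulus."""
    return np.exp(-0.5 * ((stimulus - centers) / width) ** 2)

def decode(response):
    """Approximate the stimulus as the response-weighted mean of the centers."""
    return float(np.dot(response, centers) / response.sum())

r = coarse_code(3.7)
print("population response:", np.round(r, 2))
print("decoded stimulus (approximate):", round(decode(r), 2))
```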
Extending the notion of data models or object models, an ontology can provide rich semantic definition not only to the metadata but also to the instance data of domain knowledge, making these semantic definitions available in machine-readable form. However, the generation of an effective ontology is a difficult task involving considerable labor and skill.
This paper discusses an Ontology Generation and Evolution Processor (OGEP) aimed at automating this process, only requesting user input when unresolvable ambiguous situations occur. OGEP directly attacks the main barrier which prevents automated (or self-learning) ontology generation: the ability to understand the meaning of artifacts and the relationships the artifacts have to the domain space. OGEP leverages existing lexical-to-ontological mappings in the form of WordNet and the Suggested Upper Merged Ontology (SUMO), integrated with a semantic pattern-based structure
referred to as the Semantic Grounding Mechanism (SGM) and implemented as a Corpus Reasoner. The OGEP
processing is initiated by a Corpus Parser performing a lexical analysis of the corpus, reading in a document (or corpus)
and preparing it for processing by annotating words and phrases. After the Corpus Parser is done, the Corpus Reasoner
uses the parts of speech output to determine the semantic meaning of a word or phrase. The Corpus Reasoner is the crux
of the OGEP system, analyzing, extrapolating, and evolving data from free text into cohesive semantic relationships.
The Semantic Grounding Mechanism provides a basis for identifying and mapping semantic relationships. By blending
together the WordNet lexicon and SUMO ontological layout, the SGM is given breadth and depth in its ability to
extrapolate semantic relationships between domain entities. The combination of all these components results in an
innovative approach to user assisted semantic-based ontology generation. This paper will describe the OGEP technology
in the context of the architectural components referenced above and identify a potential technology transition path to
Scott AFB's Tanker Airlift Control Center (TACC) which serves as the Air Operations Center (AOC) for the Air
Mobility Command (AMC).
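OGEP's use of WordNet as a lexical-to-ontological resource can be illustrated in a very simplified form, outside of OGEP itself, with NLTK: look up the synsets and hypernyms of a noun that a corpus parser might have annotated. The example word is arbitrary.

```python
# Simplified illustration of lexical grounding via WordNet (using NLTK),
# not the OGEP Corpus Reasoner itself.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

word = "tanker"                      # e.g., a noun annotated by a corpus parser
for synset in wn.synsets(word, pos=wn.NOUN):
    hypernyms = [h.name() for h in synset.hypernyms()]
    print(f"{synset.name()}: {synset.definition()}")
    print(f"  hypernyms: {hypernyms}")
```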
Although more information than ever before is available to support the intelligence
analyst, the vast proliferation of types of data, devices, and protocols makes it increasingly
difficult to ensure that the right information is received by the right people at the right time.
Analysts struggle to balance information overload and an information vacuum depending on
their location and available equipment. The ability to securely manage and deliver critical
knowledge and actionable intelligence to the analyst in a reliable manner, regardless of device configuration, classification level, or location, would provide the analyst 24/7 access to usable information. There are several important components to an intuitive system that can provide timely information in a user-preferred manner. Two of these components are discussed in this paper: information presentation based on the user's preferences and requirements, and the identification of solutions to the problem of secure information delivery across multiple security levels.
This work investigates the behavior of a distributed team of agents on a dynamic distributed task allocation
problem. Previous work finds that a distributed decision making process can effectively assign tasks appropriately
to team members even when agents have only local information. We study this problem in a distributed
environment in which agents can move, thus causing local neighborhoods to change over time. Results indicate
that a higher level of adaptation is clearly required in the dynamic environment. Despite the increased difficulty,
the distributed team is able to achieve comparable behavior in both static and dynamic environments.
The evolution of cultures is ultimately determined by mechanisms of the human mind. The paper discusses the mechanisms of the evolution of language from primordial undifferentiated animal cries to contemporary conceptual contents. In parallel with this differentiation, conceptual contents were also differentiated from
emotional contents of languages. The paper suggests the neural brain mechanisms involved in these processes. Experimental evidence and theoretical arguments are discussed, including mathematical approaches to cognition and language: modeling fields theory, the knowledge instinct, and the dual model connecting language and cognition. Mathematical results are related to cognitive science, linguistics, and psychology. The paper gives an initial
mathematical formulation and mean-field equations for the hierarchical dynamics of both the human mind and culture.
In the heterarchy of the mind, the operation of the knowledge instinct manifests through mechanisms of differentiation and synthesis. The emotional contents of language are related to language grammar. The conclusion is an emotional version of the Sapir-Whorf hypothesis. The cultural advantages of "conceptual" pragmatic cultures, in which emotionality of language is diminished and differentiation overtakes synthesis, resulting in fast evolution at the price of self-doubt and internal crises, are compared to those of traditional cultures, where differentiation lags behind synthesis, resulting in cultural stability at the price of stagnation. A multi-language, multi-ethnic society might combine the benefits of stability and fast differentiation. Unsolved problems and future theoretical and experimental directions are discussed.
The Personalized Assistant that Learns (PAL) Program is a Defense Advanced Research Projects Agency (DARPA) research effort that is advancing technologies in the area of cognitive learning by developing
cognitive assistants to support military users, such as commanders and decision makers. The Air Force Research Laboratory's (AFRL) Information Directorate leveraged several core PAL components and
applied them to the Web-Based Temporal Analysis System (WebTAS) so that users of this system can have automated features, such as task learning, intelligent clustering, and entity extraction. WebTAS is a
modular software toolset that supports fusion of large amounts of disparate data sets, visualization, project organization and management, pattern analysis and activity prediction, and includes various presentation aids. WebTAS is predominantly used by analysts within the intelligence community, and with the addition of these automated features, many transition opportunities exist for this integrated technology. Further, AFRL completed an extensive test and evaluation of this integrated software to determine its effectiveness for military applications in terms of timeliness and situation awareness, and these findings and conclusions,
as well as future work, will be presented in this report.
Computer network security has become a very serious concern of commercial, industrial, and military organizations
due to the increasing number of network threats such as outsider intrusions and insider covert activities.
An important security element, of course, is network intrusion detection, a difficult real-world problem that has been addressed through many different solution approaches. Artificial immune systems have been shown to be among the most promising. By enhancing jREMISA, a multi-objective evolutionary-algorithm-inspired artificial immune system, with a secondary defense layer, we produce improved accuracy of intrusion classification and flexibility in responsiveness. This responsiveness can be leveraged to provide a much more
powerful and accurate system, through the use of increased processing time and dedicated hardware which has
the flexibility of being located out of band.
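As a generic illustration of the artificial-immune-system idea (not jREMISA or its secondary defense layer), the sketch below trains negative-selection detectors against "self" traffic features and flags records that fall near any detector; the feature dimensions, radii, and synthetic traffic are arbitrary.

```python
# Generic negative-selection sketch for network anomaly detection; illustrative
# only, not the jREMISA algorithm or its secondary defense layer.
import numpy as np

rng = np.random.default_rng(7)
self_traffic = rng.normal(size=(500, 4))          # normalized features of normal ("self") traffic

def generate_detectors(n_detectors=1000, self_radius=1.0):
    """Keep only random detectors that do not match any self sample."""
    detectors = []
    while len(detectors) < n_detectors:
        candidate = rng.uniform(-4, 4, size=4)
        if np.min(np.linalg.norm(self_traffic - candidate, axis=1)) > self_radius:
            detectors.append(candidate)
    return np.array(detectors)

detectors = generate_detectors()

def is_flagged(record, match_radius=1.0):
    """A record is flagged when it falls within match_radius of any detector."""
    return bool(np.min(np.linalg.norm(detectors - record, axis=1)) < match_radius)

test_self = rng.normal(size=(100, 4))                   # more normal traffic
test_anomalous = rng.uniform(2.5, 4.0, size=(100, 4))   # traffic unlike anything in self
print("false alarms on self-like records:", np.mean([is_flagged(r) for r in test_self]))
print("detections on anomalous records: ", np.mean([is_flagged(r) for r in test_anomalous]))
```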
Advanced Approaches for Image and Audio Processing
Evolutionary algorithms (EAs) have been employed in recent years in the design of robust image transforms.
EAs attempt to optimize the defining filter coefficients of a discrete wavelet transform (DWT) to improve image
quality for bandwidth-restricted surveillance applications, such as the transmission of images by swarms of
unmanned aerial vehicles (UAVs) over shared channels. Regardless of the specific algorithm employed, filter
coefficients are optimized over a common fitness landscape that defines allowable configurations that filters may
take. Any optimization algorithm attempts to identify highly-fit filter configurations within the landscape. The
evolvability of transform filters depends upon the ruggedness, deceptiveness, neutrality, and modality of the
underlying landscape traversed by the EA. We have previously studied the evolvability of image transforms for
satellite image processing with regard to ruggedness and deceptiveness. Here we examine the position of wavelet
coefficients within a landscape to determine whether optimization algorithms should be seeded near this position
or randomly seeded in the global landscape. Through examination of landscape deceptiveness, both near wavelet
coefficients and throughout the global range of the landscape, we determine that the neighborhood surrounding
the wavelet contains a greater concentration of highly fit solutions. EAs that concentrate their search effort in
this neighborhood have a better chance of identifying filters that improve upon standard wavelets. An improved
understanding of the underlying fitness landscape characteristics impacts the design of evolutionary algorithms
capable of identifying near-optimal image transforms suitable for deployment in defense and security applications
of image processing.
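A minimal sketch of the seeding question (illustrative, not the authors' EA): initialize one population by perturbing commonly cited CDF 9/7 analysis low-pass coefficients and another uniformly at random, then compare the best initial fitness under a stand-in fitness function that a real application would replace with an image-quality measure.

```python
# Illustrative sketch of the two seeding strategies discussed above; the fitness
# function is a stub standing in for reconstruction quality after quantization.
import numpy as np

rng = np.random.default_rng(3)

# Commonly cited CDF 9/7 analysis low-pass coefficients.
CDF97 = np.array([
    0.026748757411, -0.016864118443, -0.078223266529,
    0.266864118443,  0.602949018236,  0.266864118443,
    -0.078223266529, -0.016864118443,  0.026748757411,
])

def fitness(coeffs):
    """Stub fitness: replace with image reconstruction quality for real use."""
    return -np.sum((coeffs - CDF97) ** 2)        # toy landscape peaked at the wavelet

def seeded_population(size=50, sigma=0.05):
    """Population initialized in the neighborhood of the standard wavelet."""
    return CDF97 + sigma * rng.normal(size=(size, CDF97.size))

def random_population(size=50, low=-1.0, high=1.0):
    """Population initialized uniformly across the global coefficient range."""
    return rng.uniform(low, high, size=(size, CDF97.size))

for name, pop in [("seeded", seeded_population()), ("random", random_population())]:
    best = max(fitness(ind) for ind in pop)
    print(f"{name:6s} initialization, best initial fitness: {best:.4f}")
```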
An important aspect of contemporary military communications is the design of robust image transforms
for defense surveillance applications. In particular, efficient yet effective transfer of critical
image information is required for decision making. The generic use of wavelets to transform an
image is a standard transform approach. However, the resulting bandwidth requirements can be
quite high, suggesting that a different bandwidth-limited transform be developed. Thus, our specific
use of genetic algorithms (GAs) attempts to replace standard wavelet filter coefficients with
an optimized transform filter in order to retain or improve image quality for bandwidth-restricted
surveillance applications. To find improved coefficients efficiently, we have developed a software-engineered distributed design employing a genetic algorithm (GA) parallel island model on small and
large computational clusters with multi-core nodes. The main objective is to determine whether
running a distributed GA with multiple islands would either give statistically equivalent results more quickly or obtain better results in the same amount of time. To compare computational performance with our previous serial results, we evaluate the "optimal" wavelet coefficients obtained by both approaches on test images, which yields excellent comparative metric values.
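A compact sketch of the island-model idea (not the authors' software-engineered implementation): several GA populations evolve independently in worker processes and periodically exchange their best individuals. The sphere fitness, population sizes, and migration settings below are placeholders.

```python
# Toy island-model GA: islands evolve in parallel processes and periodically
# exchange their best individuals via ring migration.
import numpy as np
from multiprocessing import Pool

DIM, POP, GENS_PER_EPOCH, EPOCHS, ISLANDS = 10, 40, 25, 4, 4

def fitness(x):
    return -np.sum(x ** 2)                      # maximize (optimum at the origin)

def evolve_island(args):
    seed, pop = args
    rng = np.random.default_rng(seed)
    for _ in range(GENS_PER_EPOCH):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-POP // 2:]]               # truncation selection
        children = parents + 0.1 * rng.normal(size=parents.shape)   # Gaussian mutation
        pop = np.vstack([parents, children])
    return pop

def best_of(pop):
    return max(pop, key=fitness)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    islands = [rng.uniform(-5, 5, size=(POP, DIM)) for _ in range(ISLANDS)]
    with Pool(ISLANDS) as pool:
        for epoch in range(EPOCHS):
            islands = pool.map(evolve_island, [(epoch * ISLANDS + i, isl)
                                               for i, isl in enumerate(islands)])
            migrants = [best_of(isl) for isl in islands]            # ring migration
            for i, isl in enumerate(islands):
                isl[0] = migrants[(i - 1) % ISLANDS]
            print(f"epoch {epoch}: best fitness = {fitness(best_of(np.vstack(islands))):.4f}")
```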
Despite the advances in face recognition research, current face recognition systems are still not accurate or robust
enough to be deployed in uncontrolled environments. A pose- and illumination-invariant face recognition system is still lacking. This research exploits the relationship between thermal infrared and visible imagery to estimate a 3D face with visible texture from infrared imagery. The relationship between visible and thermal infrared texture is learned using kernel canonical correlation analysis (KCCA), and then a 3D modeler is used to estimate the geometric structure from the predicted visible imagery. This research will find its application in uncontrolled environments where illumination- and pose-invariant identification or tracking is required at long range, such as urban search and rescue (Amber alert, missing dementia patient) and manhunt scenarios.
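As a simplified stand-in for the cross-modality learning step (linear CCA via scikit-learn rather than the kernelized version used in the paper), the sketch below learns a shared subspace between paired thermal and visible feature vectors and predicts visible features from a thermal probe; the data and dimensions are synthetic.

```python
# Simplified stand-in for the cross-modality learning step: linear CCA
# (scikit-learn) instead of kernel CCA, on synthetic paired feature vectors.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(5)
n_pairs, d_thermal, d_visible = 300, 64, 64

latent = rng.normal(size=(n_pairs, 8))                   # shared facial structure
thermal = latent @ rng.normal(size=(8, d_thermal)) + 0.1 * rng.normal(size=(n_pairs, d_thermal))
visible = latent @ rng.normal(size=(8, d_visible)) + 0.1 * rng.normal(size=(n_pairs, d_visible))

cca = CCA(n_components=8)
cca.fit(thermal, visible)                                # learn paired correlations

probe_thermal = thermal[:1]                              # a "new" thermal signature
predicted_visible = cca.predict(probe_thermal)           # estimated visible texture features
print("prediction error:", float(np.mean((predicted_visible - visible[:1]) ** 2)))
```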
Over the last five to seven years the use of chat in military contexts has expanded quite significantly, in some cases
becoming a primary means of communicating time-sensitive data to decision makers and operators. For example, during
humanitarian operations with Joint Task Force-Katrina, chat was used extensively to plan, task, and coordinate pre-deployment and ongoing operations. The informal nature of chat communications allows the relay of far more
information than the technical content of messages. Unlike formal documents such as newspapers, chat is often emotive.
"Reading between the lines" to understand the connotative meaning of communication exchanges is now feasible, and
often important. Understanding the connotative meaning of text is necessary to enable more useful automatic
intelligence exploitation. The research project described in this paper was directed at recognizing user connotations of
uncertainty and urgency. The project built a matrix of speech features indicative of these categories of meaning,
developed data mining software to recognize them, and evaluated the results.
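To make the feature-matrix idea concrete, here is a toy sketch (not the project's software, feature set, or data): a few hypothetical surface features of chat messages feed a simple classifier for an "urgent" label.

```python
# Toy sketch of surface features for chat urgency; the features, labels, and
# classifier here are illustrative, not the project's feature matrix or model.
import numpy as np
from sklearn.linear_model import LogisticRegression

URGENT_WORDS = {"asap", "now", "immediately", "urgent"}

def features(msg):
    words = msg.lower().split()
    return [
        msg.count("!"),                                    # exclamation marks
        sum(w.strip("!?.,") in URGENT_WORDS for w in words),
        sum(c.isupper() for c in msg) / max(len(msg), 1),  # "shouting" ratio
    ]

messages = [
    ("need medevac at grid NOW!!", 1),
    ("request status update when convenient", 0),
    ("URGENT: bridge out, reroute convoy immediately", 1),
    ("weather looks clear for tomorrow", 0),
]
X = np.array([features(m) for m, _ in messages])
y = np.array([label for _, label in messages])

clf = LogisticRegression().fit(X, y)
print(clf.predict([features("send supplies ASAP!")]))     # should lean toward urgent (1)
```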
We describe a methodology for multiframe image registration of airborne high resolution, multi-camera imagery. In the
absence of predetermined camera and lens models, parameters are optimally determined from imagery and known
ground reference locations. GPS and IMU data collected from the sensor platform and the identified camera model
parameters are used to perform an initial orthorectification and georeferencing of each image. Multiple KLT, SIFT, or featureless point-match correspondences are identified and validated using RANSAC techniques. Affine transform hypotheses are then generated, inconsistent hypotheses are removed using a RANSAC approach, and a final optimal transform is computed as the least-squares fit of the remaining correspondences. To eliminate long-term drift,
key frames are selected and cross-registered. Performance improvements can also be demonstrated using a mask to
eliminate correspondences not on the ground plane. This approach is illustrated using the 2007 AFRL Columbus Large
Image Format dataset.
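The correspondence-to-transform step can be sketched with OpenCV (a generic illustration, not the paper's pipeline): detect and match features between two frames, then estimate a least-squares affine transform with RANSAC outlier rejection. The file names are placeholders, and ORB is used here purely for convenience; SIFT or KLT tracks would plug in the same way.

```python
# Generic OpenCV illustration of the correspondence/RANSAC-affine step; not the
# paper's pipeline. "frame1.png"/"frame2.png" are placeholder file names.
import cv2
import numpy as np

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)                       # ORB here; SIFT or KLT also work
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC removes inconsistent correspondences; the returned matrix is the
# least-squares affine fit over the surviving inliers.
affine, inlier_mask = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                           ransacReprojThreshold=3.0)
print("inliers:", int(inlier_mask.sum()), "of", len(matches))
registered = cv2.warpAffine(img1, affine, (img2.shape[1], img2.shape[0]))
```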
Partial Least Squares Regression (PLSR) and Data-Driven High Dimensional Scaling (DD-HDS) are employed for the
prediction and the visualization of changes in polar lipid expression induced by different combinations of wild-type (wt)
p53 gene therapy and SN38 chemotherapy of U87 MG glioblastoma cells. A very detailed analysis of the gangliosides
reveals that certain gangliosides of GM3 or GD1-type have unique properties not shared by the others. In summary, this
preliminary work shows that data mining techniques are able to determine the modulation of gangliosides by different
treatment combinations.
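A minimal sketch of the PLSR step with scikit-learn, on synthetic stand-in data rather than the lipidomic measurements: regress lipid-expression profiles on treatment descriptors and inspect which responses carry the most weight.

```python
# Minimal PLSR sketch with synthetic stand-in data (not the lipidomic measurements).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(11)
n_samples, n_treat_vars, n_lipids = 40, 4, 120

X = rng.normal(size=(n_samples, n_treat_vars))            # treatment descriptors
W = rng.normal(size=(n_treat_vars, n_lipids))
Y = X @ W + 0.5 * rng.normal(size=(n_samples, n_lipids))  # lipid expression profiles

pls = PLSRegression(n_components=3).fit(X, Y)
print("R^2 on training data:", round(pls.score(X, Y), 3))
# Y-loadings indicate which lipid signals respond most strongly to the treatments.
print("top responding lipid indices:", np.argsort(np.abs(pls.y_loadings_).sum(axis=1))[-5:])
```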
Motion induced artifacts represent a major problem in detection and diagnosis of breast cancer in dynamic
contrast-enhanced magnetic resonance imaging. The goal of this paper is to evaluate the performance of a
new motion correction algorithm based on different feature extraction techniques and subsequent classification
techniques. Based on several simulation results, we determined the optimal motion compensation parameters,
the optimal feature number, and tested different classification techniques. Our results have shown that motion compensation can, in some cases, improve classification results.
This paper presents a new approach for the design of feature-extracting recognition networks that do not require expert
knowledge in the application domain. Feature-Extracting Recognition Networks (FERNs) are composed of
interconnected functional nodes (feurons), which serve as feature extractors, and are followed by a subnetwork of
traditional neural nodes (neurons) that act as classifiers. A concurrent evolutionary process (CEP) is used to search the
space of feature extractors and neural networks in order to obtain an optimal recognition network that simultaneously
performs feature extraction and recognition. By constraining the hill-climbing search functionality of the CEP to specific parts of the solution space, i.e., individually limiting the evolution of feature extractors and neural networks, it was demonstrated that concurrent evolution is a necessary component of the system. Application of this approach to a handwritten digit recognition task illustrates that the proposed methodology is capable of producing recognition networks that perform on par with other methods, without the need for expert knowledge in image processing.
Real-time evolvable systems are possible with a hardware implementation of Genetic Algorithms (GA). We report the
design of an IP core that implements a general purpose GA engine which has been successfully synthesized and verified
on a Xilinx Virtex II Pro FPGA Device (XC2VP30). The placed and routed IP core has an area utilization of only 13%
and a clock speed of 50 MHz. The GA core can be customized in terms of the population size, number of generations,
cross-over and mutation rates, and the random number generator seed. The GA engine can be tailored to a given
application by interfacing with the application specific fitness evaluation module as well as the required storage memory
(to store the current and new populations). The core is soft in nature, i.e., a gate-level netlist is provided that can be readily integrated with the user's system. The GA IP core can be readily used in FPGA-based platforms for space and military applications (e.g., surveillance, target tracking). The main advantages of the IP core are its programmability,
small footprint, and low power consumption. Examples of concept systems in sensing and surveillance domains will be
presented.
Numerous man-made and natural disasters have stricken mankind since the beginning of the new millennium. The scale
and impact of such disasters often prevent the collection of sufficient data for an objective assessment and coordination
of timely rescue and relief missions on the ground. As a potential solution to this problem, in recent years constellations
of Earth observation small satellites and in particular micro-satellites (<100 kg) in low Earth orbit have emerged as an
efficient platform for reliable disaster monitoring. The main task of the Earth observation satellites is to capture images
of the Earth's surface using various techniques. For a large number of applications, the resulting delay between image
capture and delivery is not acceptable, in particular for rapid response remote sensing aiming at disaster monitoring and
detection. In such cases almost instantaneous data availability is a strict requirement to enable an assessment of the
situation and instigate an adequate response. Examples include earthquakes, volcanic eruptions, flooding, forest fires and
oil spills. The proposed solution to this issue is low-cost networked distributed satellite systems in low Earth orbit
capable of connecting to terrestrial networks and geostationary Earth orbit spacecraft in real time. This paper discusses
enabling technologies for rapid response disaster monitoring and detection from space such as very small satellite design,
intersatellite communication, intelligent on-board processing, distributed computing and bio-inspired routing techniques.
This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES),
of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the
9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and
reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces
the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while
maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In
addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from
the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital
photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our
evolved transform greatly improves the quality of reconstructed images without substantial loss of compression
capability over a broad range of image classes.
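The optimization loop can be sketched with the `cma` Python package (an illustration under simplifying assumptions, not the study's actual fitness function): evolve a small coefficient vector with CMA-ES against a stand-in objective, where the real application would instead score reconstruction MSE of images after wavelet-style compression and quantization.

```python
# Sketch of the CMA-ES optimization loop using the `cma` package. The objective
# here is a stand-in (denoising a toy signal); the study's real fitness measures
# reconstruction MSE of images under quantization error.
import numpy as np
import cma

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.3 * rng.normal(size=t.size)

def objective(coeffs):
    """MSE between the clean signal and the noisy signal filtered by `coeffs`."""
    reconstructed = np.convolve(noisy, coeffs, mode="same")
    return float(np.mean((reconstructed - clean) ** 2))

x0 = np.zeros(9)
x0[4] = 1.0                                   # start from the identity filter
es = cma.CMAEvolutionStrategy(x0, 0.2, {"seed": 2, "verbose": -9})
es.optimize(objective, iterations=200)
print("evolved 9-tap filter:", np.round(es.result.xbest, 3))
print("MSE: identity =", round(objective(x0), 4), " evolved =", round(es.result.fbest, 4))
```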
The rapid advancements in ad hoc sensor networks, MEMS (micro-electro-mechanical systems) devices, low-power
electronics, adaptive hardware and systems (AHS), reconfigurable architectures, high-performance computing platforms,
distributed operating systems, micro-spacecraft, and micro-sensors have enabled the design and development of a high-performance satellite sensor network (SSN). Due to the changing environment and the varying missions that an SSN may have, there is an increasing need to develop efficient strategies to design, operate, and manage the system at different levels, from an individual satellite node to the whole network. Towards this end, this paper presents an adaptive approach to space-based picosatellite sensor networks by exploiting efficient bio-inspired optimization algorithms,
particularly for solving multi-objective optimization problems at both local (node) and global (network) system levels.
The proposed approach can be hierarchically used for dealing with the challenging optimization problems arising from
the energy-constrained satellite sensor networks. Simulation results are provided to demonstrate the effectiveness of the
proposed approach through its application in solving both node-level and system-level optimization problems.
Efficient Global Optimization (EGO) is a competent global optimization algorithm which can be useful for problems with expensive cost functions [1,2,3,4,5]. The goal is to find the global minimum using as few function evaluations as possible. Our research indicates that EGO requires far fewer evaluations than genetic algorithms (GAs). However, neither algorithm always drills down to the absolute minimum; therefore, the addition of a final local search technique is indicated. In this paper, we introduce three "endgame" techniques. The techniques can improve optimization efficiency
(fewer cost function evaluations) and, if required, they can provide very accurate estimates of the global minimum. We
also report results using a different cost function than the one previously used [2,3].
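The general idea of finishing a global search with a local "endgame" refinement can be sketched as follows (a generic illustration, not one of the paper's three techniques): a kriging-style surrogate with expected improvement proposes new points, and the best point found is then polished with a local optimizer. The Rastrigin stand-in cost, candidate sampling, and budgets are arbitrary.

```python
# Generic sketch: surrogate-based global search followed by a local "endgame" polish.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def cost(x):                                      # stand-in "expensive" cost function
    x = np.atleast_2d(x)
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=1) + 10 * x.shape[1]

rng = np.random.default_rng(4)
bounds = np.array([[-5.12, 5.12]] * 2)
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(10, 2))   # initial design
y = cost(X)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(30):                               # EGO-style infill iterations
    gp.fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    z = (y.min() - mu) / np.maximum(sigma, 1e-9)
    ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_new = cand[np.argmax(ei)]
    X = np.vstack([X, x_new])
    y = np.append(y, cost(x_new))

x_best = X[np.argmin(y)]
endgame = minimize(lambda x: float(cost(x)[0]), x_best, method="Nelder-Mead")  # local polish
print("surrogate-stage best:", round(float(y.min()), 4))
print("after endgame refinement:", round(float(endgame.fun), 4))
```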
Chromosome design has been shown to be a crucial element in developing genetic algorithms which approach global
solutions without premature convergence. The consecutive positioning of parameters with high correlations and relevance enhances the creation of genetic building blocks that are likely to persist across recombination to provide
genetic inheritance. Incorporating positional gene relevance is challenging, however, in multi-dimensional design
problems. We present a hybrid chromosome designed for optimizing a fragmented patch antenna which combines
linear and two-dimensional gene representations. We compare previous results obtained with a linear chromosome to
solutions obtained with this new hybrid representation.
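A toy sketch of the hybrid encoding idea (not the authors' antenna chromosome): an individual couples a 2-D binary patch grid with a linear real-valued gene segment, and crossover recombines each part in its natural geometry. Grid size, parameter count, and operators are illustrative.

```python
# Toy hybrid chromosome: a 2-D binary grid (fragmented patch layout) coupled with
# a linear real-valued gene segment (e.g., feed parameters). Illustrative only.
import numpy as np

rng = np.random.default_rng(9)
GRID = (8, 8)      # fragmented patch cells
N_LINEAR = 4       # real-valued design parameters

def random_individual():
    return {"grid": rng.integers(0, 2, size=GRID), "linear": rng.uniform(-1, 1, N_LINEAR)}

def crossover(a, b):
    """Recombine the 2-D part block-wise and the linear part with one-point crossover."""
    child = {"grid": a["grid"].copy(), "linear": a["linear"].copy()}
    r, c = rng.integers(1, GRID[0]), rng.integers(1, GRID[1])
    child["grid"][r:, c:] = b["grid"][r:, c:]            # swap a 2-D block, preserving locality
    cut = rng.integers(1, N_LINEAR)
    child["linear"][cut:] = b["linear"][cut:]            # standard linear one-point crossover
    return child

parent1, parent2 = random_individual(), random_individual()
child = crossover(parent1, parent2)
print(child["grid"])
print(np.round(child["linear"], 2))
```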
Numerical simulations are used to improve in-band disruption of a phase-locked loop (PLL). Disruptive inputs
are generated by integrating a system of nonlinear ordinary differential equations (ODEs) for a given set of
parameters. Each integration yields a set of time series, of which one is used to modulate a carrier input to the
PLL. The modulation is disruptive if the PLL is unable to accurately reproduce the modulation waveform. We
view the problem as one of optimization and employ an evolutionary algorithm to search the parameter space of
the excitation ODE for those inputs that increase the phase error of the PLL subject to restrictions on excitation
amplitude or power. Restricting amplitude (frequency deviation) yields a modulation that approximates a
square wave. Constraining modulation power leads to a chaotic excitation that requires less power to disrupt
loop operation than either the sinusoid or square wave modulations.
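Under simplifying assumptions (a first-order baseband PLL model and the Lorenz system as a stand-in chaotic excitation; neither is taken from the paper), the sketch below shows the kind of evaluation an evolutionary search would repeat: integrate the ODE for candidate parameters, use one state variable as frequency modulation, and score the resulting PLL phase error.

```python
# Evaluation step an evolutionary search would repeat: integrate a chaotic ODE
# (Lorenz here as a stand-in), use one state as frequency modulation, and score
# the phase error of a simplified first-order baseband PLL model.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma, rho, beta):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def phase_error_score(params, f_dev=50.0, loop_gain=200.0, dt=1e-4, T=0.5):
    sigma, rho, beta = params
    t = np.arange(0, T, dt)
    sol = solve_ivp(lorenz, (0, T), [1.0, 1.0, 1.0], t_eval=t,
                    args=(sigma, rho, beta), rtol=1e-6)
    m = sol.y[0] / (np.max(np.abs(sol.y[0])) + 1e-12)      # normalized modulation
    theta = 2 * np.pi * f_dev * np.cumsum(m) * dt           # input phase from FM
    theta_hat, err = 0.0, np.zeros_like(theta)
    for k in range(theta.size):                             # first-order PLL tracking loop
        err[k] = theta[k] - theta_hat
        theta_hat += loop_gain * np.sin(err[k]) * dt
    return float(np.mean(np.abs(err)))                      # larger = more disruption

# An EA would search (sigma, rho, beta) to maximize this score subject to an
# amplitude or power constraint on the modulation m(t).
print("classic Lorenz params:", round(phase_error_score([10.0, 28.0, 8.0 / 3.0]), 3))
print("non-chaotic params:   ", round(phase_error_score([10.0, 0.5, 8.0 / 3.0]), 3))
```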
Wireless (virtual) synapses represent a novel approach to bio-inspired neural networks that follow the infrastructure
of the biological brain, except that biological (physical) synapses are replaced by virtual ones based on cellular
telephony modeling. Such synapses are of two types: intracluster synapses are based on IR wireless links, while intercluster synapses are based on RF wireless links. Such synapses have three unique features, atypical of
conventional artificial ones: very high parallelism (close to that of the human brain), very high reconfigurability
(easy to kill and to create), and very high plasticity (easy to modify or upgrade). In this paper we analyze the general
concept of wireless synapses with special emphasis on RF wireless synapses. Also, biological mammalian
(vertebrate) neural models are discussed for comparison, and a novel neural lensing effect is discussed in detail.
While genetic algorithms are powerful optimization tools, they typically require many evaluations of the function space. This limits their utility when the time per evaluation is significant. We discuss one such application, the
optimization of antenna positioning on ship-board platforms. We present the issues involved and propose intelligent
preprocessing and genetic algorithm modifications which reduce both function evaluation time and the extent and
complexity of the function space. While these strategies were developed for this particular application, most would be
suitable for other complex military optimization problems.
In this paper we evaluate several methods to register and stabilize a motion imagery video sequence under the layered
sensing construct. Layered sensing is a new construct in the repertoire of the US Air Force. Under the layered
sensing paradigm, an area is surveyed by a multitude of sensors at many different altitudes and operating across many
modalities. This combination of sensors provides better insight into a situation than could ever be achieved with a single
sensor. A fundamental requirement to utilize this technology is to first register and stabilize the data from each of the
individual sensors. The contribution of this paper is to explore and provide a preliminary evaluation of techniques for
image registration of Electro-Optical (EO) video sequences taken from Wide Area Persistent Surveillance (WAPS)
platforms whose views are centered on a city. Additionally, evaluation metrics for such techniques are described and
explored.
Advances in multi-sensor automated target recognition, tracking, and Wide Area Persistent Surveillance promise to
enable a broad spectrum of intent and behavior recognition models. However, a significant gap remains between
coordinated behavior analysis tools working at high levels of information abstraction and target identification and tracking systems working with direct inputs from motion imagery. In this paper, we describe the problem of modeling adversarial
behavior signatures faced by Air Force researchers, present a range of solutions to automate the discovery of behavior
patterns, and outline the gap in the research space to enable efficient integration of multi-level information exploitation,
analysis, and sensor management tools.
Artificial neural networks (ANNs) are used to determine the state-of-health (SOH) of the Automated Radioxenon
Analyzer/Sampler (ARSA). ARSA is a gas collection and analysis system used for non-proliferation monitoring in
detecting radioxenon released during nuclear tests. SOH diagnostics are important for automated, unmanned sensing
systems so that remote detection and identification of problems can be made without onsite staff. Both recurrent and
feed-forward ANNs are presented. The recurrent ANN is trained to predict sensor values based on current valve states,
which control air flow, so that the normal SOH sensor values can be predicted from valve states alone. Deviation between the modeled and actual values is an indication of a potential problem. The feed-forward ANN acts as a nonlinear version of
principal components analysis (PCA) and is trained to replicate the normal SOH sensor values. Because of ARSA's
complexity, this nonlinear PCA is better able to capture the relationships among the sensors than standard linear PCA
and is applicable to both sensor validation and recognizing off-normal operating conditions. Both models provide
valuable information to detect impending malfunctions before they occur to avoid unscheduled shutdown. Finally, the
ability of ANN methods to predict the system state is presented.
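The feed-forward "nonlinear PCA" idea can be sketched with a small autoencoder-style network, here using scikit-learn's MLPRegressor as a stand-in for the paper's ANN: train it to reproduce normal SOH sensor vectors and flag records with large reconstruction error. Sensor dimensions, architecture, and threshold are illustrative.

```python
# Sketch of the nonlinear-PCA idea with an autoencoder-style network
# (scikit-learn MLPRegressor as a stand-in); dimensions and threshold are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
n, n_sensors = 2000, 12
latent = rng.normal(size=(n, 3))                           # a few underlying process modes
mix = rng.normal(size=(3, n_sensors))
normal_data = np.tanh(latent @ mix) + 0.05 * rng.normal(size=(n, n_sensors))

# The bottleneck hidden layer forces a compressed, nonlinear representation.
ae = MLPRegressor(hidden_layer_sizes=(16, 3, 16), activation="tanh",
                  max_iter=2000, random_state=0)
ae.fit(normal_data, normal_data)                           # learn to reproduce normal SOH vectors

def reconstruction_error(x):
    return float(np.mean((ae.predict(x.reshape(1, -1)) - x) ** 2))

threshold = np.percentile([reconstruction_error(x) for x in normal_data[:200]], 99)
faulty = normal_data[0].copy()
faulty[4] += 3.0                                           # simulate a stuck or shifted sensor
print("normal error:", round(reconstruction_error(normal_data[0]), 4))
print("faulty error:", round(reconstruction_error(faulty), 4), "(threshold", round(threshold, 4), ")")
```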
We discuss the development of Position-Adaptive Sensors [1] for purposes of detecting embedded chemical substances
in challenging environments. This concept is a generalization of patented Position-Adaptive Radar Concepts developed
at AFRL for challenging conditions such as urban environments. For purposes of investigating the detection of
chemical substances using multiple MAV (Micro-UAV) platforms, we have designed and implemented an experimental
testbed with sample structures such as wooden carts that contain controlled leakage points. Under this general concept,
some of the members of a MAV swarm can serve as external position-adaptive "transmitters" by blowing air over the
cart and some of the members of a MAV swarm can serve as external position-adaptive "receivers" that are equipped
with chemical or biological (chem/bio) sensors that function as "electronic noses". The objective can be defined as
improving the particle count of chem/bio concentrations that impinge on a MAV-based position-adaptive sensor that
surrounds a chemical repository, such as a cart, via the development of intelligent position-adaptive control algorithms.
The overall effect is to improve the detection and false-alarm statistics of the overall system.
Within the major sections of this paper, we discuss a number of different aspects of developing our initial MAV-Based
Sensor Testbed. This testbed includes blowers to simulate position-adaptive excitations and a MAV from Draganfly
Innovations Inc. with stable design modifications to accommodate our chem/bio sensor boom design. We include details
with respect to several critical phases of the development effort including development of the wireless sensor network
and experimental apparatus, development of the stable sensor boom for the MAV, integration of chem/bio sensors and
sensor node onto the MAV and boom, development of position-adaptive control algorithms and initial tests at IDCAST
(Institute for the Development and Commercialization of Advanced Sensor Technologies), and autonomous position-adaptive chem/bio tests and demos in the MAV Lab at the AFRL Air Vehicles Directorate. For this particular MAV
implementation of chem/bio sensors, we selected miniature Methane, Nitrogen Dioxide, and Carbon Monoxide sensors.
To safely simulate the behavior of chem/bio substances in our laboratory environment, we used either cigarette smoke
or incense. We present a set of concise parametric results along with visual demonstration of our new position-adaptive
sensor capability. Two types of experiments were conducted: with sensor nodes screening the chemical contaminant
(cigarette smoke or incense) without MAVs, and with a sensor node integrated with the MAV. It was shown that the
MOS-based chemical sensors could be used for chemical leakage detection, as well as for position-adaptive sensors on
air/ground vehicles as sniffers for chemical contaminants.
The MITRE Sensor Layer Prototype is an initial design effort to enable every sensor to help create new capabilities
through collaborative data sharing. By making both upstream (raw) and downstream (processed) sensor data visible,
users can access the specific level, type, and quantities of data needed to create new data products that were never
anticipated by the original designers of the individual sensors.
The major characteristic that sets sensor data services apart from typical enterprise services is the volume (on the order
of multiple terabytes) of raw data that can be generated by most sensors. Traditional tightly coupled processing
approaches extract pre-determined information from the incoming raw sensor data, format it, and send it to predetermined
users. The community is rapidly reaching the conclusion that tightly coupled sensor processing loses too
much potentially critical information.1 Hence upstream (raw and partially processed) data must be extracted, rapidly
archived, and advertised to the enterprise for unanticipated uses.
The authors believe layered sensing net-centric integration can be achieved through a standardize-encapsulate-syndicate-aggregate-manipulate-process paradigm. The Sensor Layer Prototype's technical approach focuses on implementing this
proof-of-concept framework to make sensor data visible, accessible, and useful to the enterprise. To achieve this, we first exploit a "raw" data tap between the physical transducers associated with sensor arrays and the embedded sensor signal processing hardware and software. Second, we encapsulate and expose both raw and partially processed data to the
enterprise within the context of a service-oriented architecture.
Third, we advertise the presence of multiple types, and multiple layers of data through geographic-enabled Really Simple
Syndication (GeoRSS) services. These GeoRSS feeds are aggregated, manipulated, and filtered by a feed aggregator.
After filtering these feeds to bring just the type and location of data sought by multiple processes to the attention of each
processing station, just that specifically sought data is downloaded to each process application.
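The aggregate-and-filter step can be sketched as follows (a generic illustration; the sample feed, tag handling, and bounding box are placeholders, not the prototype's actual services): parse a GeoRSS document and keep only entries whose georss:point falls inside an area of interest.

```python
# Generic illustration of filtering a GeoRSS feed by an area of interest; the
# sample feed and bounding box below are placeholders, not the prototype's services.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0" xmlns:georss="http://www.georss.org/georss">
  <channel>
    <item><title>EO frame 0012</title><georss:point>39.76 -84.19</georss:point></item>
    <item><title>GMTI track 7</title><georss:point>41.50 -81.70</georss:point></item>
  </channel>
</rss>"""

AOI = {"lat": (39.0, 40.5), "lon": (-85.0, -83.5)}          # area of interest (placeholder)
NS = {"georss": "http://www.georss.org/georss"}

root = ET.fromstring(SAMPLE_FEED)
for item in root.iter("item"):
    title = item.findtext("title")
    lat, lon = map(float, item.findtext("georss:point", namespaces=NS).split())
    if AOI["lat"][0] <= lat <= AOI["lat"][1] and AOI["lon"][0] <= lon <= AOI["lon"][1]:
        print(f"download: {title} at ({lat}, {lon})")        # only data the process actually needs
```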
The Sensor Layer Prototype participated in a proof-of-concept demonstration in April 2008. This event allowed multiple
MITRE innovation programs to interact among themselves to demonstrate the ability to couple value-adding but
previously unanticipated users to the enterprise. For this event, the Sensor Layer Prototype was used to show data
entering the environment in real time. Multiple data types were encapsulated and added to the database via the Sensor
Layer Prototype, specifically National Imagery Transmission Format 2.1 (NITF), NATO Standardization Agreement 4607
(STANAG 4607), Cursor-on-Target (CoT), Joint Photographic Experts Group (JPEG), Hierarchical Data Format
(HDF5) and several additional sensor file formats describing multiple sensors addressing a common scenario.
We present a unique method for converting traditional cellular automata (CA) rules into analytical function form. CA
rules have been successfully used for morphological image processing and volumetric shape recognition and
classification. Further, the use of CA rules as analog models to the physical and biological sciences can be significantly
extended if analytical (as opposed to discrete) models could be formulated. We show that such transformations are
possible. We use as our example John Horton Conway's famous "Game of Life" rule set. We show that using Data
Modeling, we are able to derive both polynomial and bi-spectrum models of the IF-THEN rules that yield equivalent
results. Further, we demonstrate that the "Game of Life" rule set can be modeled using the multi-fluxion, yielding a
closed form nth order derivative and integral. All of the demonstrated analytical forms of the CA rule are general and
applicable to real-time use.
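One way to see that such an analytical form exists (a simple illustration of the idea, not the paper's Data Modeling or multi-fluxion machinery) is to fit a least-squares polynomial in the cell state and live-neighbor count that reproduces the Game of Life IF-THEN rule over all 18 possible input pairs.

```python
# Simple illustration that the Game of Life IF-THEN rule admits a polynomial
# form (not the paper's Data Modeling / multi-fluxion machinery): fit a
# polynomial in (cell state c, live-neighbor count n) over all 18 cases.
import numpy as np
from itertools import product

def life_rule(c, n):
    """Conway's rule: a cell is alive next step iff n == 3, or c == 1 and n == 2."""
    return 1.0 if (n == 3 or (c == 1 and n == 2)) else 0.0

cases = list(product([0, 1], range(9)))                     # all (c, n) input pairs
targets = np.array([life_rule(c, n) for c, n in cases])

# Basis: n^j and c * n^j for j = 0..8 (18 terms for 18 cases -> exact interpolation).
def features(c, n):
    powers = [n ** j for j in range(9)]
    return powers + [c * p for p in powers]

A = np.array([features(c, n) for c, n in cases], dtype=float)
coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)

predicted = A @ coeffs
print("max deviation from the IF-THEN rule:", float(np.max(np.abs(predicted - targets))))
```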