A recursive multisensor association algorithm based on fuzzy logic has been developed. It simultaneously determines fuzzy grades of membership and fuzzy cluster centers. It can associate data from various sensor types and, in its simplest form, makes no assumption about noise statistics, unlike many association algorithms. The algorithm operates without operator intervention and associates data from the same target across multiple sensor types. It also provides an estimate of the number of targets present, reduced-noise estimates of the quantities being measured, and a measure of confidence to assign to the data association. A comparison of the algorithm to a more conventional Bayesian association algorithm is provided using ESM and radar data; the data from both systems are noisy, the ESM data are intermittent, and the radar data have a probability of detection less than unity. The effect of a large number of targets in the data on parameter estimation, determination of the number of targets, and multisensor data association is examined. A method for determining the sliding data window size based on fuzzy clustering in the multitarget case is discussed.
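The alternating update of membership grades and cluster centers described above is essentially a fuzzy c-means iteration. The sketch below is a minimal, generic fuzzy c-means routine in NumPy, not the paper's recursive algorithm; the fuzzifier m, the iteration count, and the toy two-target data are illustrative assumptions.

```python
import numpy as np

def fuzzy_cluster(obs, n_clusters, m=2.0, iters=50, seed=0):
    """Generic fuzzy c-means sketch: alternately update membership grades
    and cluster centers for a batch of multisensor measurements."""
    rng = np.random.default_rng(seed)
    n, _ = obs.shape
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # rows sum to 1
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ obs) / w.sum(axis=0)[:, None]          # fuzzy centers
        dist = np.linalg.norm(obs[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / (dist ** (2.0 / (m - 1.0)))    # standard membership update
        u /= u.sum(axis=1, keepdims=True)
    return u, centers

# toy usage: two well-separated "targets" observed with noise
obs = np.vstack([np.random.randn(20, 2) + [0, 0],
                 np.random.randn(20, 2) + [10, 10]])
grades, centers = fuzzy_cluster(obs, n_clusters=2)
```

In an association setting, each row of `grades` gives a measurement's degree of membership in each candidate target cluster, which can serve as the kind of association-confidence measure mentioned above.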
In this paper we present the development of a multisensor fusion algorithm using multidimensional data association for multitarget tracking. The work is motivated by a large-scale ground target surveillance problem, where observations from multiple asynchronous sensors with time-varying sampling intervals (e.g., electronically scanned array radars) are used for centralized fusion. The combination of multisensor fusion with multidimensional assignment is done so as to maximize the 'time-depth,' in addition to the 'sensor-width,' for the number S of lists handled by the assignment algorithm. The time-depth results from the simultaneous use of multiple frames of measurements obtained at different time instants. The sensor-width comes from the geographically distributed nature of the sensors. A procedure which guarantees maximum effectiveness for an S-dimensional data association (S greater than or equal to 3), i.e., maximum time-depth (S-1) for each sensor without sacrificing the fusion across sensors, is presented. Using a sliding-window technique (of length S), the estimates are updated after each frame of measurements. The algorithm provides a systematic approach to automatic track formation, maintenance and termination for multitarget tracking using multisensor fusion with multidimensional assignment for data association. Estimation results are presented for simulated data.
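The full S-dimensional assignment is NP-hard and is typically attacked with Lagrangian relaxation; the sketch below only illustrates the cost construction for a single 2-D slice of the problem (one measurement frame against existing tracks), using gated Mahalanobis-distance costs and SciPy's assignment solver. The function names, the gate value, and the cost form are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_frame(track_preds, track_covs, meas, gate=9.21):
    """Assign one frame of measurements to predicted tracks (2-D slice)."""
    n_t, n_m = len(track_preds), len(meas)
    cost = np.full((n_t, n_m), 1e6)              # large cost = forbidden pairing
    for i, (x, P) in enumerate(zip(track_preds, track_covs)):
        Pinv = np.linalg.inv(P)
        for j, z in enumerate(meas):
            d2 = float((z - x) @ Pinv @ (z - x))    # squared Mahalanobis distance
            if d2 < gate:
                cost[i, j] = d2 + np.log(np.linalg.det(2 * np.pi * P))
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 1e6]

tracks = [np.array([0.0, 0.0]), np.array([10.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
meas = [np.array([0.2, -0.1]), np.array([9.7, 5.3]), np.array([30.0, 30.0])]
print(associate_frame(tracks, covs, meas))       # unassociated measurement left out
```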
A Joint Multitarget Probability (JMP) is a posterior probability density p_T(x_1, ..., x_T | Z) that there are T targets (T an unknown number) with unknown locations specified by the multitarget state X = (x_1, ..., x_T)^T, conditioned on a set of observations Z. This paper presents a numerical approximation for implementing JMP in detection, tracking and sensor management applications. A problem with direct implementation of JMP is that, if each x_t, t = 1, ..., T, is discretized on a grid of N elements, N^T variables are required to represent JMP on the T-target sector. This produces a large computational requirement even for small values of N and T. However, when the sensor easily separates targets, the resulting JMP factorizes and can be approximated by a product representation requiring only O(T^2 N) variables. Implementation of JMP for multitarget tracking requires a Bayes' rule step for measurement update and a Markov transition step for time update. If the measuring sensor is only influenced by the cell it observes, the JMP product representation is preserved under measurement update. However, the product form is not quite preserved by the Markov time update, but can be restored using a minimum discrimination approach. All steps for the approximation can be performed with O(N) effort. This notion is developed and demonstrated in numerical examples with at most two targets in a 1-dimensional surveillance region. In this case, numerical results for detection and tracking for the product approximation and the full JMP are very similar.
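For a single target on a 1-D grid, the two recursion steps named above (Bayes' rule measurement update and Markov time update) look like the sketch below; it is a generic grid filter written to illustrate the per-step cost, not the paper's JMP implementation. The grid size, Gaussian likelihood, and motion kernel are illustrative assumptions.

```python
import numpy as np

N = 200
prior = np.full(N, 1.0 / N)                      # uniform prior over N cells
cells = np.arange(N, dtype=float)

def measurement_update(p, z, sigma=2.0):
    like = np.exp(-0.5 * ((cells - z) / sigma) ** 2)   # per-cell likelihood
    post = p * like
    return post / post.sum()                            # Bayes' rule

def time_update(p, kernel):
    pred = np.convolve(p, kernel, mode="same")   # Markov transition as convolution
    return pred / pred.sum()

kernel = np.array([0.25, 0.5, 0.25])             # simple diffusion-like motion model
p = measurement_update(prior, z=80.0)
p = time_update(p, kernel)
```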
One of the tasks associated with a heterogeneous, multi-sensor system is determining which function to perform (search, track, or identification). Previous work by the authors focused on the use of the information gain attributable to a reduction in kinematic, identification, or search uncertainty as a useful cost function for trading off the possible uses of a sensor. This view has been subsumed as a subcomponent of a new approach (not covered here) which quantitatively apportions goal-values ordered in a lattice among the several tasks. That is, rather than use an information measure to decide whether to search, track, or identify, an information instantiator is used to determine when to schedule the next observations of a target in order to ensure that tracks are maintained, areas of uncertainty are searched, and important targets are identified. The scheduling of these observations among the various sensors is optimized separately using the previously developed OGUPSA algorithm. Information instantiation is a collection of methods used to convert information needs at the mission management level into the actual type of measurement(s) to make. This paper describes the methods used to schedule measurements of search areas with associated probabilities of detection to meet search information needs, to obtain measurements of a target in track to reduce its kinematic uncertainty to a specified level, and to reduce the uncertainty about a target's identity, both as a specific information gain in identification and to capitalize on that ID to increase target track accuracy. A brief description and block diagram of our complete sensor management model is also presented to show the interrelationship of the information instantiator to the other components.
The tracking of maneuvering targets is complicated by the fact that acceleration is not directly observable or measurable. Additionally, acceleration can be induced by a variety of sources including human input, autonomous guidance, or atmospheric disturbances. The approaches to tracking maneuvering targets can be divided into two categories, both of which assume that the maneuver input command is unknown. One approach is to model the maneuver as a random process. The other approach assumes that the maneuver is not random and that it is either detected or estimated in real time. The random process models generally assume one of two statistical properties, either white noise or autocorrelated noise. The multiple-model approach is generally used with the white noise model, while a zero-mean, exponentially correlated acceleration approach is used with the autocorrelated noise model. The nonrandom approach uses maneuver detection to correct the state estimate, or a variable-dimension filter to augment the state estimate with an extra state component during a detected maneuver. Another issue in the tracking of maneuvering targets is whether to implement the Kalman filter in polar or Cartesian coordinates. This paper examines and compares several exponentially correlated acceleration approaches in both polar and Cartesian coordinates for accuracy and computational complexity. They include the Singer model in both polar and Cartesian coordinates, the Singer model in polar coordinates converted to Cartesian coordinates, Helferty's third-order rational approximation of the Singer model, and the Bar-Shalom and Fortmann model. This paper shows that these models all provide very accurate position estimates with only minor differences in velocity estimates, and compares the computational complexity of the models.
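For reference, the discrete-time state transition of the Singer (zero-mean, exponentially correlated acceleration) model for one Cartesian axis is sketched below; the transition matrix is the standard Singer form, while the process-noise matrix is replaced here by a crude white-acceleration approximation rather than the full Singer expression. The maneuver time constant, sampling period, and acceleration variance are illustrative assumptions.

```python
import numpy as np

def singer_F(T, tau):
    """Singer transition for state x = [position, velocity, acceleration]."""
    a = 1.0 / tau                                # inverse maneuver time constant
    e = np.exp(-a * T)
    return np.array([
        [1.0, T,   (a * T - 1.0 + e) / a**2],
        [0.0, 1.0, (1.0 - e) / a],
        [0.0, 0.0, e],
    ])

def approx_Q(T, sigma_a2):
    # crude white-acceleration stand-in for the Singer process noise (assumption)
    G = np.array([[T**2 / 2.0], [T], [1.0]])
    return sigma_a2 * (G @ G.T)

F = singer_F(T=1.0, tau=20.0)
Q = approx_Q(T=1.0, sigma_a2=0.5)
x_pred = F @ np.array([0.0, 10.0, 0.0])          # one Kalman prediction step
```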
This paper describes a unified, theoretically rigorous approach for measuring the performance of data fusion algorithms, using information theory. The proposed approach is based on 'finite-set statistics' (FISST), a direct generalization of conventional statistics to multisource, multitarget problems. FISST makes it possible to directly extend Shannon-type information metrics to multisource, multitarget problems. This can be done, moreover, in such a way that mathematical 'information' can be defined and measured even though an evaluator/end-user may have conflicting or even subjective definitions of what 'informative' means. The result is a scientifically defensible means of (1) comparing the performance of two algorithms with respect to a 'level playing field' when ground truth is known; (2) estimating the internal on-the-fly effectiveness of a given algorithm when ground truth is not known; and (3) dynamically choosing between algorithms (or different modes of a multi-mode algorithm) on the basis of the information content they provide.
Technical developments to our work in the area of communications management in decentralized data fusion systems are described. These include combining the identification and tracking sub-systems into a single simulator, and increasing the complexity of the sensing model, communication model, and target/platform trajectories. These developments are used to determine which communications management philosophy is most appropriate: (1) one based on identification only, (2) one based on tracking only, or (3) one based on identification and tracking in combination. The paper concludes that, for the scenario investigated, communications management based purely on identification provides good identification performance but poor track performance. The converse was true when the management was based purely on track information. However, when the communication management decision philosophy was based on both identification and track information, good performance in both sub-systems was achieved.
This paper explores the application of semiotic principles to the design of a multisensor-multilook fusion system. Semiotics is an approach to analysis that attempts to process media in a unified way using qualitative rather than quantitative methods. The term semiotic refers to signs, or signatory data that encapsulate information. Semiotic analysis involves the extraction of signs from information sources and the subsequent processing of those signs into meaningful interpretations of the information content of the source. The multisensor fusion problem predicated on a semiotic system structure and incorporating semiotic analysis techniques is examined, and the design of a multisensor system as an information fusion system is explored. Semiotic analysis opens the possibility of using non-traditional sensor sources and modalities in the fusion process, such as verbal and textual intelligence derived from human observers. Examples of how multisensor/multimodality data might be analyzed semiotically are shown, and how a semiotic system for multisensor fusion could be realized is outlined. The architecture of a semiotic multisensor fusion processor that can accept situational awareness data is described, although an implementation has not yet been constructed.
A relative sensors-to-target geometry measure-of-merit (MOM), based on the Geometric Dilution of Precision (GDOP) measure, is developed. The method of maximum likelihood estimation is introduced for the solution of the position location problem. A linearized measurement model-based error sensitivity analysis is used to derive an expression for the GDOP MOM. The GDOP MOM relates the sensor measurement errors to the target position errors as a function of sensors-to-target geometry. In order to illustrate the efficacy of the GDOP MOM for fusion systems, GDOP functional relationships are computed for bearing-only measuring sensors-to-target geometries. The minimum GDOP and the associated specific target-to-sensors geometries are computed and illustrated for both two and three bearing-only measuring sensors. Two- and three-dimensional plots of relative error contours provide geometric insight into sensor placement as a function of geometry-induced error dilution. The results can be used to select preferred target-to-sensor(s) geometries for M sensors in this application. The GDOP MOM is general and is readily extendable to other measurement-based sensors and fusion architectures.
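A minimal sketch of the bearing-only GDOP computation is given below: the bearing measurements are linearized about the target position and GDOP is taken as sqrt(trace((H^T H)^-1)) for unit bearing-noise variance. The sensor and target positions are illustrative assumptions; the paper's MOM additionally folds in the actual sensor error statistics.

```python
import numpy as np

def bearing_gdop(sensors, target):
    """GDOP for bearing-only sensors observing a 2-D target position."""
    rows = []
    for sx, sy in sensors:
        dx, dy = target[0] - sx, target[1] - sy
        r2 = dx * dx + dy * dy
        # gradient of bearing atan2(dy, dx) with respect to target position
        rows.append([-dy / r2, dx / r2])
    H = np.array(rows)
    cov = np.linalg.inv(H.T @ H)                 # error covariance per unit bearing noise
    return np.sqrt(np.trace(cov))

sensors = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
print(bearing_gdop(sensors, target=(5.0, 20.0)))
```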
The Fusion Processor Simulation (FPSim) is being developed by Rome Laboratory to support the Discriminating Interceptor Technology Program (DITP) and the Advanced Sensor Technology Program (ASTP) of the Ballistic Missile Defense Organization. The purpose of the FPSim is to serve as a test bed and evaluation tool for establishing the feasibility of achieving threat engagement timelines. The FPSim supports the integration, evaluation, and demonstration of different strategies, system concepts, and Acquisition, Tracking & Pointing (ATP) subsystems and components. The environment comprises a simulation capability within which users can integrate and test their application software models, algorithms and databases. The FPSim must evolve as algorithm developments mature to support independent evaluation of contractor designs and the integration of a number of fusion processor subsystem technologies. To accomplish this, the simulation contains validated modules, databases, and simulations. It possesses standardized engagement scenarios, architectures and subsystem interfaces, and provides a hardware and software framework which is flexible enough to support growth, reconfiguration, and simulation component modification and insertion. Key user interaction features include: (1) visualization of platform status through displays of the surveillance scene as seen by imaging sensors; (2) user-selectable data analysis and graphics display during simulation execution as well as during post-simulation analysis; (3) automated, graphical tools to permit the user to reconfigure the FPSim, i.e., 'plug and play' various model/software modules. The FPSim is capable of hosting and executing users' image processing, signal processing, subsystem, and functional algorithms for evaluation purposes.
The Scale Model Rocket Experiments (SRE) were conducted in August and September 1997 as part of the Ballistic Missile Defense Organization (BMDO) Advanced Sensor Technology Program (ASTP) and Discriminating Interceptor Technology Program (DITP). Rome Laboratory (RL) efforts under the ASTP involve the following technology areas: sensor fusion algorithms, high performance processors, and sensor modeling and simulation. In support of the development, test and integration of these areas, Rome Laboratory performed the scale model rocket experiments. This paper details the experiments and results of the SRE as a cost-effective, risk-reduction experiment to test fusion processor algorithms in a real-time environment. The goals of the experiment were to launch, track, fuse, and collect multispectral data from visible, IR, RADAR and LADAR sensors. The data were collected in real time and interfaced to the RL-HPC (PARAGON) for real-time processing. In June 1997, RL performed the first tests of the series on static targets. The static firings tested data transfers and safety protocols. The RL (Hanscom) IR cameras were calibrated and the proper gain settings were acquired. The next phase of SRE testing, on August 12-13, 1997, involved launching, tracking, and acquiring digital IR data into the HPC. In September, RL carried out the next phase of the experiments by incorporating a LADAR and an additional IR sensor from Phillips Laboratory into the system. This paper discusses the successes of the SRE and future work.
A method for recognition of unknown targets using large databases of model targets is discussed. Our approach is based on parallel processing of multi-class hash databases that are generated off-line. A geometric hashing technique is used on feature points of model targets to create each class database. Bit-level coding is then performed to represent the models in an image format. Parallelism is achieved during the recognition phase. Feature points of an unknown target are passed to parallel processors, each accessing an individual class database. Each processor reads a particular class of hash database and indexes the feature points of the unknown target. A simple voting technique is applied to determine the model that best matches the unknown. The paper discusses our technique and the results from testing with unknown FLIR targets.
Two texture-based features and one amplitude-based feature are evaluated as detection statistics for synthetic aperture radar (SAR) imagery. The statistics include a local variance, an extended fractal, and a two-parameter CFAR feature. The paper compares the effectiveness of focus-of-attention (FOA) algorithms that consist of any number of combinations of the three statistics. The public MSTAR database is used to derive receiver operating characteristic (ROC) curves for the different detectors at various signal-to-clutter ratios (SCR). The database contains one-foot-resolution X-band SAR imagery. The results in the paper indicate that the extended fractal statistic provides the best target/clutter discrimination, and the variance statistic is the most robust with respect to SCR. In fact, the extended fractal statistic combines the intensity-difference information also used by the CFAR feature with the spatial extent of the higher-intensity pixels to generate an attractive detection statistic.
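As a point of reference for the amplitude-based feature, a minimal two-parameter CFAR statistic, (pixel - local mean) / local standard deviation over a sliding background window, can be sketched as below; the window size, threshold, and the absence of a guard region are illustrative simplifications, and the random image is a stand-in for SAR amplitude data.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cfar_statistic(img, win=21):
    """Two-parameter CFAR statistic over a sliding background window."""
    mean = uniform_filter(img, size=win)
    mean_sq = uniform_filter(img * img, size=win)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 1e-12))
    return (img - mean) / std

img = np.abs(np.random.randn(128, 128))          # stand-in for SAR amplitude data
detections = cfar_statistic(img) > 4.0           # threshold on the CFAR statistic
```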
This paper discusses the application of Hidden Markov Models (HMMs) to the Automatic Target Recognition (ATR) problem in Synthetic Aperture Radar (SAR) imagery. Related research applying HMMs to SAR ATR problems can be found in Kottke et al. Our approach is based on a cascade of three stages: preprocessing, feature extraction and selection, and classification. Preprocessing and feature extraction and selection involve operations performed on the Radon transform of target chips. The features, which are invariant to changes in rotation, position and shift, although not to changes in scale, are optimized through the use of feature selection techniques. The classification stage takes as its inputs the multidimensional multiple-observation sequences and parameterizes them statistically using continuous density models to capture the target and background appearance variability. Experimental results demonstrate that the recognition rate can be as high as 95% over both the training set and the testing set in certain cases.
Resolution is a fundamental limitation of any processing based on radar data. Conventional radar imaging techniques, in general, make use of the FFT to determine the spatial location of a target from its scattered field. The resolution of these images is limited by the bandwidth of the interrogating radar system and the aspect angle sector over which the target is observed. In such cases, superresolution offers the potential to improve system performance by increasing the resolution. Superresolution is the process of increasing the effective bandwidth of an image (or time series) by introducing collateral data to augment the dataset; thus the Rayleigh resolution imposed by the size of the dataset is overcome by the introduction of the synthetic collateral data. This paper presents a state-of-the-art survey of radar superresolution, applicable to both 1-D and 2-D data and to both the discrete and distributed cases. It presents a comparison of superresolution algorithms using real and simulated data sets. It also presents specific applications of superresolution for air-to-ground surveillance, data resolution enhancement, SAR ATR and FOPEN ATR.
The premise of foveal vision is that surveying a large area with low resolution to detect regions of interest, followed by their verification with localized high resolution, is a more efficient use of computational and communications throughput than resolving the area uniformly at high resolution. This paper presents target/clutter discrimination techniques that support the foveal multistage detection and verification of infrared-sensed ground targets in cluttered environments. The first technique uses a back-propagation neural network to classify narrow field-of-view, high-acuity image chips using their projection onto a set of principal components as input features. The second technique applies linear discriminant analysis to the same input features. Both techniques include refinements that address generalization and errors in the position of detected regions of interest. Experimental results using second-generation forward-looking infrared imagery are presented.
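The shared front end of both techniques, projecting image chips onto leading principal components to obtain a compact feature vector, might look like the NumPy sketch below; the chip size, component count, and random stand-in data are illustrative assumptions, and the downstream back-propagation network or linear discriminant stage is omitted.

```python
import numpy as np

def pca_basis(chips, k=10):
    """Compute mean and top-k principal components of flattened image chips."""
    X = chips.reshape(len(chips), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(chips, mean, basis):
    """Project chips onto the principal components to form feature vectors."""
    X = chips.reshape(len(chips), -1) - mean
    return X @ basis.T

chips = np.random.rand(100, 32, 32)              # stand-in for FLIR image chips
mean, basis = pca_basis(chips, k=10)
features = project(chips, mean, basis)           # shape (100, 10)
```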
The difficult task of target recognition through thermal sights is critical to battlefield success and to the reduction of fratricide. All final decisions to shoot or not shoot reside with the human operator. Until now, little insight into effective human visual discrimination has been applied to the thermal signature identification task. New multimedia training software developed at NVESD trains thermal signature recognition based upon a cognitive-neuroscience understanding of object recognition. An experiment with 109 soldiers at Ft. Hood, Texas indicates that the software effectively trains the shapes and locations of emissive sources (mainly engines and exhaust hot spots) of tactical ground vehicles, significantly improving combat vehicle recognition. Incorporation of object recognition theory into Assisted Target Recognition (ATR) implementations may yield significant improvements for detection and recognition of vehicle targets.
In this paper, texture classification is studied based on the fractal dimension (FD) of filtered versions of the image and the Fuzzy ARTMAP neural network (FAMNN). FD is used because it has shown good tolerance to some image transformations. We implemented a variation of the testing phase of Fuzzy ARTMAP that exhibited performance superior to the standard Fuzzy ARTMAP and the 1-nearest-neighbor (1-NN) classifier in the presence of noise. The performance of the above techniques is tested with respect to segmentation of images that include more than one texture.
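The fractal dimension feature can be estimated in several ways; a common one is box counting, sketched below for a binary (e.g., edge-filtered) image. The box sizes and the random stand-in image are illustrative assumptions; the paper computes FD on filtered versions of the texture image, which is not reproduced here.

```python
import numpy as np

def box_counting_fd(binary, sizes=(2, 4, 8, 16, 32)):
    """Estimate fractal dimension by box counting on a binary image."""
    counts = []
    for s in sizes:
        h, w = binary.shape
        hb, wb = h // s, w // s
        blocks = binary[:hb * s, :wb * s].reshape(hb, s, wb, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # boxes containing any pixel
    # slope of log(count) versus log(1/box size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

img = np.random.rand(128, 128) > 0.7             # stand-in binary texture
print(box_counting_fd(img))
```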
This report is based on trials in which a number of trained test subjects were given the task of deciding at what level of noise and spatial degradation they could identify fighter aircraft in computer-generated images. The test subjects gave a confidence level with their answers. This enabled finding the probability of correct identification as a function of noise and blurring. The report shows the results of some 6000 tests with 23 fighter aircraft seen from three different aspects: top, side and front. Chapter 1 of this document is the introduction. Chapter 2 gives the theoretical background and the algebraic definitions of the quantities used. Chapter 3 describes the software and the test process used in the tests. Chapter 4 gives the results.
Automatic Target Recognition is typically based on single-frame image processing. In this paper we report on our work in improving ATR performance by exploiting image sequences using a combination of target detection and tracking. The proposed detection/tracking system consists of three subsystems: (1) the target detection module, which is based on a combination of multiresolution neural network target filters whose outputs are combined by a probabilistic belief network; (2) the sensor motion compensation system, which generates a dense velocity field over the actual image frame, thus estimating the effect of the unknown sensor platform motion in image coordinates; and (3) a multi-target tracker, which associates existing target tracks with new observations. Using real-world examples, we show that the combined detection/tracking method overcomes the problem of spurious false alarms generated by the single-frame target detector.
This paper addresses the problem of prioritizing, i.e., preserving with higher fidelity, regions of interest (ROI) during image compression. Regions of interest are found, for example, in medical imagery, where only a small area is useful for diagnosis, or in surveillance images, where targets have to be identified and tracked. These ROI are often characterized by fine details which therefore need to be preserved if the image is to be of any use after it is decompressed. Wavelet-based image compression is appropriate for such tasks because of its localization property. We present an algorithm, based on Shapiro's popular EZW (Embedded image coding using Zerotrees of Wavelet coefficients), to prioritize regions of interest. A non-uniform quantizer with smaller steps for smaller coefficients is used on the coefficients of the ROI. This allows the fine details of the ROI to be transmitted first, and successive approximation quantization to be used to reduce the quantization error on the larger coefficients of the image, ROI or non-ROI. Simulation results show that this approach efficiently preserves the fine details of the ROI.
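The sketch below illustrates the underlying ROI-prioritization idea in the wavelet domain, quantizing coefficients that overlap the ROI with a finer step than the rest, using PyWavelets for a single-level transform. It is a plain-quantizer illustration, not Shapiro's EZW bit-plane/zerotree coder, and the wavelet, step sizes, and ROI mask are illustrative assumptions.

```python
import numpy as np
import pywt

def roi_quantize(image, roi_mask, fine=4.0, coarse=32.0, wavelet="haar"):
    """Quantize wavelet coefficients finely inside the ROI, coarsely elsewhere."""
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    # downsample the ROI mask to the subband resolution
    m = roi_mask[::2, ::2][:cA.shape[0], :cA.shape[1]]
    def q(band):
        step = np.where(m, fine, coarse)
        return np.round(band / step) * step
    return pywt.idwt2((q(cA), (q(cH), q(cV), q(cD))), wavelet)

img = np.random.rand(128, 128) * 255
mask = np.zeros((128, 128), dtype=bool)
mask[40:80, 40:80] = True                        # hypothetical region of interest
reconstructed = roi_quantize(img, mask)
```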
In this paper, we discuss the possibility of using artificial neural networks (ANNs) as feature detectors in automatic target recognition (ATR). The goal is to discern a vehicle in an infrared image. We train ANNs to recognize the most easily recognizable parts of the vehicles, the wheels. The specific ANNs we use, shared weight ANNs, are especially adept at such an image recognition task due to their specialized architecture. The feature detection stage results in an image containing in each pixel the output of the ANN, indicating its confidence in the classification. We can then use a simple sequence of image processing algorithms on this image to find peaks and, by counting the number of these peaks, vehicles. This system is tested on sensitivity to scale differences and background clutter and is shown to perform quite well.
The Federal Aviation Administration is examining a variety of technologies for augmenting surface radar detection at airports. This paper describes the testing of infrared cameras and operational concepts to improve detection and tracking of targets on airport surfaces. Three different cameras were tested during summer and winter months and during inclement weather. Two operational concepts were tested at Dulles International Airport. A prototype image processing system is described that extracts target coordinates from camera video output and passes them to an AMASS simulator for fusion with radar and other target tracking data. All three cameras evaluated were able to detect and recognize a variety of targets on a runway surface including humans, vehicles, and small and large airplanes. The range to detection and recognition varies with each camera's instantaneous FOV, thermal sensitivity, atmospheric conditions and operating conditions. Each camera was found to meet specific FAA requirements in unique ways.
This paper presents a data fusion-based approach to designing an Automated Fingerprint Identification System (AFIS). Fingerprint matching methods vary from pattern matching, using ridge structure, orientation, or even the entire fingerprint itself, to point critical matching, using localized features such as ridge discontinuities, e.g. minutiae, or porous structures. Localized matching methods, such as minutiae, tend to yield more compact templates, in general, than pattern based methods. However, the reliability of localized features may be an issue, since they are affected adversely by the quality of the captured fingerprint, i.e. the degree of noise. Minutiae-based matching methods tend to be slower, albeit more accurate, than pattern-based methods. The trade-off in designing a cost-effective AFIS in terms of processing power (CPU) used, matching speed, and accuracy, lies in the choice of the proper matching methods that are selected to optimize performance by maximizing the matching accuracy while minimizing the search time. In this paper we present a systematic design and study of a fusion-based AFIS using a multiplicity of matching methods to optimize system performance and minimize required CPU cost.
The use of a Markov random field for both the restoration and segmentation of images is well known. It has also been shown that this framework can be extended to allow the fusion of data extracted from several images, all registered with each other but from different sensors. The main limitation of these fusion methods is that they rely on the use of stochastic sampling methods and are consequently prohibitively slow, even with the use of dedicated processors. This has prevented the easy use of these methods in real-time systems. Here a new approach to the fusion problem is taken. An alternative construction for the Markov random field is used. This concentrates only on the construction of the image boundary map, leaving the pixel values fixed. This, coupled with the use of an appropriately designed Iterative Conditional Modes (ICM) algorithm, produces an algorithm which is significantly less expensive and which, with the correct processor, it is hoped may be operated in real time.
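For readers unfamiliar with ICM, the sketch below shows the generic deterministic update on a simple two-class label field: each pixel takes the label minimizing a data term plus a Potts-style smoothness penalty over its 4-neighbours. This is the textbook pixel-label version, not the paper's boundary-map construction; the class means, smoothness weight, and test image are illustrative assumptions.

```python
import numpy as np

def icm_segment(img, means=(0.2, 0.8), beta=1.5, iters=5):
    """Two-class ICM segmentation with a Potts smoothness prior."""
    labels = np.argmin([(img - m) ** 2 for m in means], axis=0)
    h, w = img.shape
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                best, best_e = labels[i, j], np.inf
                for k, m in enumerate(means):
                    e = (img[i, j] - m) ** 2                 # data term
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] != k:
                            e += beta                        # label-disagreement penalty
                    if e < best_e:
                        best, best_e = k, e
                labels[i, j] = best
    return labels

img = np.clip(0.2 + 0.6 * (np.arange(32) > 16) + 0.1 * np.random.randn(32, 32), 0, 1)
seg = icm_segment(img)
```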
This survey paper presents several different methods for combining multiple mine detection sensors on a vehicle. There are many methods of classifier combination that have been proposed recently that may, more generally, be applied to the problem of sensor fusion. These sensor fusion algorithms include the majority vote, unanimous consensus, thresholded voting, polling methods which utilize heuristic decision rules, the averaged Bayes classifier, applying logistic regression to the outputs of each classifier, and using Dempster-Shafer theory to derive weights for each sensor's vote.
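A few of the listed combination rules are simple enough to sketch directly; the snippet below shows a majority vote, an averaged Bayes classifier, and a thresholded (weighted) vote for K sensors. The weights and threshold are illustrative assumptions, and the logistic-regression and Dempster-Shafer variants are omitted.

```python
import numpy as np

def majority_vote(decisions):
    """decisions: array of 0/1 votes from K sensors."""
    return int(np.sum(decisions) > len(decisions) / 2)

def averaged_bayes(posteriors):
    """posteriors: K x C array of per-sensor class probabilities."""
    return int(np.argmax(np.mean(posteriors, axis=0)))

def thresholded_vote(decisions, weights, thresh):
    """Weighted vote compared against a threshold."""
    return int(np.dot(weights, decisions) >= thresh)

votes = np.array([1, 0, 1])
probs = np.array([[0.7, 0.3], [0.4, 0.6], [0.8, 0.2]])
print(majority_vote(votes), averaged_bayes(probs),
      thresholded_vote(votes, weights=[0.5, 0.2, 0.3], thresh=0.5))
```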
This paper reports on the fusion of IR imagery with range data such as that obtained from a laser range finder. Both an air/sea and a land-based scenario have been studied. The range information is used to calculate a priori scale information for the detection process in the IR images. The use of this scale information leads to a substantial improvement in recognition performance.
Within a Surveillance and Reconnaissance System, the Fusion Process is an essential part of the software package, since the measurements from the different sensors are combined by this process; each sensor sends its data to a fusion center whose task is to construct the best tactical picture. In this paper, a practical data fusion algorithm applied to a military context is presented; the case studied here is a medium-range surveillance situation featuring a dual-sensor platform which combines a surveillance radar and an IRST; both sensors are collocated. The presented performances were obtained on validation scenarios via simulations performed by SAGEM with the ESSOR ('Environnement de Simulation de Senseurs Optroniques et Radar') multisensor simulation test bench.
Helicopters flying at low altitude under visual flight rules often collide with obstacles such as power transmission lines. This paper describes the image sensors used to detect obstacles and several image processing techniques for extracting and enhancing the targets in the images. Images including obstacles were collected both on the ground and from the air using an infrared (IR) camera and a color video camera in different backgrounds, at different distances, and under different weather conditions. The collected results revealed that IR images have an advantage over color images for detecting obstacles in many environments. Several image processing techniques have been evaluated to improve the quality of the collected images; for example, fusion of IR and color images and several filters, such as the median filter and adaptive filters, have been tested. The information that the target is thin and long, which characterizes the shape of power lines, has been introduced to extract power lines. It has been shown that these processes can greatly reduce the noise and enhance the contrast, regardless of the background. It has also been demonstrated that there is a good prospect that these processes will help develop an algorithm for automatic obstacle detection and warning.
In multiple-sensor fusion, several sensors acquire the same information, thereby increasing the reliability and accuracy of that information through a systematic process of combining the sensory data obtained from different sources. The accuracy of the sensory data depends upon the precision of the sensor and the environmental conditions in which the sensor operates. Most existing sensor integration systems (SIS) use some form of statistical technique and hence lack the flexibility to change or replace inaccurate sensor(s). This paper presents an Intelligent Sensor Integrated System (ISIS) approach that uses a knowledge database of the sensors and allows for changing or replacing sensor(s). The proposed system uses fuzzy logic to achieve this objective.
This paper studies a design method for a decentralized signal detection system which consists of adaptive fuzzified local detectors and a data fusion rule that learns its weights on-line. The local detectors for inaccurate signal parameters are modeled by means of fuzzy sets; such a model can adapt to changes in the inaccurate signal parameters. The data fusion center learns the local decision weights on-line based on the optimal decision rules. The combination of the robustness of the fuzzified local detectors and the adaptability of the self-learning fusion rule yields good detection performance for decentralized detection of a signal with an unknown, non-random parameter of unknown distribution.
This paper presents a relaxation-based autofocus (AUTORELAX) algorithm that can be used to compensate for the aperture errors in curvilinear synthetic aperture radar (CLSAR) and to extract three-dimensional target features. The data model for the autofocus problem in CLSAR is presented. Experimental and simulation results show that AUTORELAX can be used to significantly improve the estimation accuracy of the target parameters.
The majority of Direction-of-Arrival (DOA) estimation methods studied in the literature work effectively in relatively strong signal power environments [positive Array Signal-to-Noise Ratio (ASNR) in dB]. In weak-signal environments, conventional beamformer-based and subspace-based methods fail to estimate the DOA correctly. The MaxMax method maintains accurate estimates of the DOA even in extremely noisy environments (-10 dB ASNR). The method is reviewed and its performance is compared with that of the conventional beamformer, Capon's beamformer, MUSIC, ESPRIT, and Min-Norm methods. In contrast with the subspace-based methods, which depend entirely on a full-rank signal covariance matrix, the MaxMax method does not. Hence, its performance remains superior to that of the others without adjusting the algorithm to the characteristics of the source signals, such as multipath or single-path propagation. If the signal power is so weak that its presence is almost negligible, Akaike's Information Criterion (AIC) and the Minimum Description Length (MDL) do not yield correct estimates of the number of signal paths. A new 'spatial sampling' technique and its performance are presented for estimating the number of signals in the case of strongly suppressed signal power.
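The MaxMax method itself is not specified in the abstract, but the conventional (Bartlett) beamformer it is compared against can be sketched briefly; the snippet below forms the sample covariance of a uniform linear array and scans steering vectors over angle. Array size, element spacing, source angle, and noise level are illustrative assumptions, and, as the abstract notes, this kind of beamformer degrades at strongly negative ASNR.

```python
import numpy as np

def bartlett_spectrum(X, n_elems, d=0.5, angles=np.linspace(-90, 90, 181)):
    """Conventional beamformer spectrum for a uniform linear array."""
    R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
    spec = []
    for ang in angles:
        a = np.exp(2j * np.pi * d * np.arange(n_elems) * np.sin(np.radians(ang)))
        spec.append(np.real(a.conj() @ R @ a) / n_elems)
    return angles, np.array(spec)

# simulate one source at 20 degrees plus noise
M, snaps = 8, 200
a = np.exp(2j * np.pi * 0.5 * np.arange(M) * np.sin(np.radians(20)))
X = np.outer(a, np.random.randn(snaps)) + 0.5 * (
    np.random.randn(M, snaps) + 1j * np.random.randn(M, snaps))
angles, spec = bartlett_spectrum(X, M)
print(angles[np.argmax(spec)])                   # estimated DOA near 20 degrees
```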
The harmogram is calculated from the power spectral density and an estimate of background noise. It is a computationally effective means of analyzing a signal for constituent periodicities. For one-dimensional signals it can (1) provide a means to determine the likelihood that a periodic component is present and (2) determine its principal frequency and related harmonics. The mathematical foundations of the harmogram have recently been extended to two-dimensional signals where it provides interesting insight into images (e.g., texture and composition). In either case, the harmogram produces a much reduced invariant feature set which is useful as a preprocess to classification. This presentation details the harmogram process and illustrates its application with several examples.
A fast algorithm for inverting a long linear convolution is presented. It is based on a sectioning procedure combined with an efficient real-valued split-radix fast Fourier transform (FFT) algorithm for solving digital signal (image) restoration problems. The minimal multiplicative complexity of the algorithm is obtained.
A dynamic-object state recognition method is proposed for increasing the reliability of visual and quantitative analyses of local phase-portrait features. The method has increased sensitivity to local changes of the initial signal and sufficient robustness to measurement noise through the use of the integrated characteristics of the analytic signal. Modeling results for the proposed method applied to an electrocardiogram signal are given.
Blind source separation (BSS) has received increased attention in the signal processing literature. The goal of blind source separation is signal recovery from an unknown channel through the maximization (or minimization) of some independence criterion. In our previous work, we derived a generalized criterion (simultaneous diagonalization of correlation matrices, SDOC) for blind source separation and explored the time-frequency structure of nonstationary signals such as speech. In this paper we first analyze the identifiability of the sources and then apply subband filters for feature extraction to improve the BSS performance of the SDOC algorithm in the realistic but difficult situation where the background noise is not negligible.
A Focal Plane Array (FPA) Radar is being developed to image objects and personnel in enclosed areas. An FPA radar continuously receives energy from every angular resolution cell in the Field-of-View. Thus, an optimal signal processor must process, in real time, a large number of simultaneous channels, 216 in the current configuration. A DSP-based processor has been developed to achieve this goal.
The Acoustic Signal Development Platform (ASDP) was designed so that mature and experimental signal processing algorithms can be rapidly tested on real target hardware. Changing the number of processors, sensors, and configurations is accomplished through the use of drop-in modules and simple software changes. The open VME architecture and Texas Instruments TIM processor module standards allow for easy changes and upgrades to accommodate changing sensor requirements. In addition to a built-in debugger, the interface between the processor modules and the host PC is accomplished using standard C function calls directly in the processor module software. This allows the processors to communicate directly with the host PC for disk storage, data analysis, and GUI displays. Thus, the ASDP allows the full development of a sensor package on real hardware before any custom system is built. Although the system was designed for acoustic signal processing, it is fully capable of performing other sensor processing tasks, such as RF and image processing.
This paper describes a novel digital signal processing algorithm for adaptively detecting and identifying signals buried in noise. The algorithm continually computes and updates the long-term statistics and spectral characteristics of the background noise. Using this noise model, a set of adaptive thresholds and matched digital filters is implemented to enhance and detect signals that are buried in the noise. The algorithm furthermore automatically suppresses coherent noise sources and adapts to time-varying signal conditions. Signal detection is performed in both the time domain and the frequency domain, thereby permitting the detection of both broad-band transients and narrow-band signals. The detection algorithm also provides for the computation of important signal features such as amplitude, timing, and phase information. Signal identification is achieved through a combination of frequency-domain template matching and spectral peak picking. The algorithm described herein is well suited for real-time implementation on digital signal processing hardware. This paper presents the theory of the adaptive algorithm, provides an algorithmic block diagram, and demonstrates its implementation and performance with real-world data. The computational efficiency of the algorithm is demonstrated through benchmarks on specific DSP hardware. The applications for this algorithm, which range from vibration analysis to real-time image processing, are also discussed.
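The core adapt-and-threshold idea can be illustrated with a time-domain toy: maintain an exponentially averaged estimate of the background mean and variance and flag samples exceeding mean + k*std, freezing adaptation on detections. The smoothing constant and k are illustrative assumptions, and the paper's spectral templates and matched filtering are omitted here.

```python
import numpy as np

def adaptive_detect(x, alpha=0.01, k=5.0):
    """Flag samples that exceed an adaptively estimated noise level."""
    mean, var = x[0], 1.0
    hits = []
    for n, s in enumerate(x):
        std = np.sqrt(var)
        if abs(s - mean) > k * std:
            hits.append(n)                       # detection: do not adapt on it
        else:
            mean = (1 - alpha) * mean + alpha * s
            var = (1 - alpha) * var + alpha * (s - mean) ** 2
    return hits

sig = np.random.randn(5000)
sig[2500:2510] += 12.0                           # transient buried in noise
print(adaptive_detect(sig)[:5])
```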
This paper shows how real-valued algorithms for reduction modulo an arbitrary polynomial and the fast Vandermonde transform (FVT) are realized on a computer using the fast Fourier transform (FFT). The real-valued FVT algorithm is based on the developed fast polynomial reduction algorithm. A realization of the FVT algorithm with a real multiplicative complexity of O(2N log2(2N)) and a real additive complexity of O(6N log2(2N)) is obtained. The new FVT algorithm is applied to digital signal filtering and interpolation problems.
Multisensor Fusion, Tracking, and Resource Management
This paper presents the basic requirements for a simulation of the main capabilities of a shipborne MultiFunction Radar (MFR) that can be used in conjunction with other sensor simulations in scenarios for studying Multi Sensor Data Fusion (MSDF) systems. This simulation is being used to support an ongoing joint effort (Canada - The Netherlands) in the development of MSDF testbeds. This joint effort is referred to as Joint-FACET (Fusion Algorithms & Concepts Exploration Testbed), a highly modular and flexible series of applications capable of processing both real and synthetic input data. The question raised here is how realistic the sensor simulations must be for the MSDF performance assessment to be trusted. A partial answer is that, at a minimum, the dominant perturbing effects on sensor detection (true or false) must be sufficiently represented. Following this philosophy, the MFR model presented here takes into account the sensor's design parameters and external environmental effects such as clutter, propagation and jamming. Previous radar simulations capture most of these dominant effects. In this paper the emphasis is on the MFR scheduler, which is the key element that must be added to the previous simulations to represent the MFR's capability to search and track a large number of targets while simultaneously supporting a large number of (semi-active) surface-to-air missiles (SAMs) for the engagement of multiple hostile targets.
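The scheduler's role can be illustrated with a deliberately simplified priority rule: serve the most urgent request that is already due, and spend any idle time on search. The task categories and priority ordering below are illustrative assumptions, not the scheduling policy of the MFR model in the paper.

```python
# Illustrative priorities only (lower value = served first); these categories
# and their ordering are hypothetical, not taken from the paper.
PRIORITY = {"missile_support": 0, "track_update": 1, "search": 2}

def next_dwell(pending, t_now):
    """Pick the next radar dwell from the pending requests.

    `pending` is a list of (kind, requested_time) tuples. The highest-priority
    request that is already due is served; otherwise the radar performs a
    search dwell. A minimal sketch of priority-driven MFR scheduling.
    """
    due = [(PRIORITY[kind], requested, kind) for kind, requested in pending
           if requested <= t_now]
    return min(due)[2] if due else "search"

# Example: a missile-support dwell due now preempts an older track update.
print(next_dwell([("track_update", 0.8), ("missile_support", 1.0)], t_now=1.0))
```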
The recognition of targets entering the surroundings of a ship and detected by an InfraRed Search and Track (IRST) system is made difficult by the low signal-to-noise ratio of the data. This results from the requirement to classify targets while they are still far enough away to permit combat-system activation if a threat is identified. Thus, exploiting as much information as is available is necessary to increase the robustness of the classification performance. But the combination of multiple information sources leads to an issue of heterogeneous data fusion. Moreover, a consequence of using a passive system is that the range to an unknown target, and therefore its trajectory, cannot be assessed easily. In such a configuration, it is difficult to determine from which aspect the target is seen, which makes the observed features much less discriminating. This paper describes a new processing architecture which aims at overcoming this difficulty by evaluating, in the framework of Dempster-Shafer (DS) theory, the likelihood of compound hypotheses consisting of a target class and an aspect angle.
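For readers unfamiliar with the DS machinery, the sketch below implements Dempster's rule of combination over compound (target class, aspect angle) hypotheses represented as sets. The classes, aspect bins, and mass values are invented for illustration and do not come from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Masses are dicts mapping frozensets of hypotheses to belief mass. Here a
    hypothesis is a (target_class, aspect_bin) pair, so compound hypotheses
    such as {('missile', 'nose'), ('missile', 'beam')} are allowed.
    """
    combined, conflict = {}, 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + mA * mB
        else:
            conflict += mA * mB           # mass assigned to contradictory pairs
    norm = 1.0 - conflict                 # renormalize over non-conflicting mass
    return {k: v / norm for k, v in combined.items()}

# Toy example with hypothetical classes and aspect bins (not from the paper).
m_kinematic = {frozenset({("missile", "nose"), ("missile", "beam")}): 0.7,
               frozenset({("aircraft", "nose")}): 0.3}
m_infrared = {frozenset({("missile", "nose")}): 0.6,
              frozenset({("aircraft", "nose"), ("missile", "beam")}): 0.4}
print(dempster_combine(m_kinematic, m_infrared))
```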
The problem of distributed detection of a signal in incompletely specified noise is considered. It is assumed that the noise can be modeled as a zero-mean Gaussian random process with an unknown, slowly varying covariance matrix. A network of m sensors receiving independent and identically distributed observations in Rp, regarding certain binary hypotheses, pass their decisions to a fusion center, which then decides which of the two hypotheses is true. We consider the situation where each sensor employs a generalized maximum likelihood ratio test with its own observations and a threshold that is the same for all the sensors. This test is invariant to intensity changes in the noise background and achieves a fixed probability of false alarm. Thus, operating in accordance with the local noise situation, the test is adaptive. In addition, it is shown that the test is UMPI (uniformly most powerful invariant). The fusion center decision is based on a k-out-of-m decision rule. The asymptotic (m → ∞) behavior of k-out-of-m rules for finite k and finite m-k is considered. For these rules, the probability of making a wrong decision does not tend to zero as m → ∞ unless the probability distributions under the hypotheses satisfy certain conditions.
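When the local decisions are independent and identically distributed, the k-out-of-m fusion rule itself reduces to a binomial computation; the short sketch below evaluates the fused detection and false-alarm probabilities under that assumption. The sensor count, rule, and local probabilities are illustrative values only.

```python
from math import comb

def fusion_decision_prob(p_local, m, k):
    """Probability that at least k of m independent sensors decide 'signal',
    when each local test decides 'signal' with probability p_local.

    With p_local = P_D (under H1) this is the fused detection probability;
    with p_local = P_FA (under H0) it is the fused false-alarm probability.
    """
    return sum(comb(m, j) * p_local**j * (1 - p_local)**(m - j)
               for j in range(k, m + 1))

# e.g. 10 sensors, each with P_D = 0.6 and P_FA = 0.05, fused with a 3-of-10 rule
print(fusion_decision_prob(0.6, 10, 3))   # fused detection probability
print(fusion_decision_prob(0.05, 10, 3))  # fused false-alarm probability
```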
A real-time (30 frame/s) two-color sensor fusion method uses a dichroic mirror to separate incident radiation into the 3-5 and 8-12 micrometer wavebands. The overlapped thermal images created by the dichroic mirror and the optical path difference are shifted and made to coincide directly on the focal plane. This is the basis for an optical image-coincidence technique that can replace the conventional time-delay-and-integration (TDI) circuit. It greatly reduces the deterioration caused by TDI circuits and results in better image quality. In the experiment, it was found that when several different bandpass filters are inserted into the optical paths, there exists an optimized fusion relationship among the incident power, waveband, detector responsivity and preamplifier gain. If an appropriate adjustment is made, the signal-to-noise ratio of the combined thermal image can be improved by 30% or more. The performance of the fused thermal image is better than that of either single waveband in terms of signal-to-noise ratio (SNR), minimum resolvable temperature difference (MRTD), etc.
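One simple way to see why a proper gain balance between the two wavebands pays off is the standard result for the optimally weighted sum of two co-registered images with independent noise: weighting each band in proportion to its SNR gives a combined SNR of sqrt(SNR_a² + SNR_b²). The sketch below shows only this textbook relation as context; the optimization reported in the paper additionally involves incident power, waveband, detector responsivity, and preamplifier gain.

```python
import numpy as np

def optimal_weighted_sum_snr(snr_a, snr_b):
    """SNR of the optimally weighted (maximal-ratio) sum of two co-registered
    images with independent noise: sqrt(snr_a**2 + snr_b**2). Illustrative
    only; not the fusion relationship derived in the paper."""
    return np.hypot(snr_a, snr_b)

# Two bands of equal quality: combined SNR is ~41% higher than a single band.
print(optimal_weighted_sum_snr(10.0, 10.0) / 10.0)
```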
This paper presents an original neural network based solution to the heterogeneous radar track fusion problem. The neural network is used to decide which tracks issued from two distinct sensors correspond to the same target. Classical fusion methods, based on distance criteria or the chi-square test, can only be used when the sensors are of the same type, i.e. when they provide the same type of information and the measurement vectors have the same dimension. When that is not the case, these criteria are applied only to the information that is common to the sensors, resulting in a loss of information. Our neural approach, based on the use of a Kohonen map, allows heterogeneous tracks to be compared without such a loss of information. A neural network associated with a given sensor maps each track onto a two-dimensional Kohonen grid. Each neuron encodes a monosensor track; the neuron inputs are defined as the latest estimated positions of the track. At convergence, the fusion of two tracks is decided depending on the position of each monosensor track on the grids: the best matching of the two neural maps is defined in such a way that the distance between two projected tracks (of two different sensors) is minimized. This matching problem is similar to the well-known assignment problem, which can also be solved by means of a neural network. Some simulation results are presented, using two-dimensional and three-dimensional radar tracks.
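A generic Kohonen self-organizing map of the kind referred to above can be sketched as follows: each monosensor track (its latest estimated positions flattened into a feature vector) is mapped to its best-matching neuron on a small 2D grid, and neighbouring neurons are pulled toward the track during training. The grid size, learning schedule, and feature encoding are our own assumptions, not the network configuration used in the paper.

```python
import numpy as np

def train_som(tracks, grid=(8, 8), epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small Kohonen map on track feature vectors (one row per track).

    Returns weights of shape (grid_rows, grid_cols, n_features). Generic SOM
    sketch with hypothetical hyperparameters.
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.normal(size=(rows, cols, tracks.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1).astype(float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3
        for x in tracks[rng.permutation(len(tracks))]:
            d = np.linalg.norm(w - x, axis=-1)                 # distance to every neuron
            bmu = np.unravel_index(np.argmin(d), d.shape)      # best-matching unit
            h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma**2))
            w += lr * h[..., None] * (x - w)                   # neighbourhood-weighted update
    return w

def project(tracks, w):
    """Grid coordinates of the best-matching neuron for each track; the fusion
    decision then compares projected positions across the two sensors' maps."""
    return [np.unravel_index(np.argmin(np.linalg.norm(w - x, axis=-1)), w.shape[:2])
            for x in tracks]
```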
One of the two most widespread approaches to the problem of detecting weak signals after the detector is considered; it consists in the use of so-called most powerful criteria. This direction remains poorly studied to date. The aim of this work is the synthesis of locally most powerful criteria for detecting weak signals after the detector for non-Gaussian noise models under several types of a priori uncertainty.
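To fix ideas, a locally optimal (locally most powerful) test for a weak known signal in i.i.d. noise correlates the known signal with a memoryless nonlinearity g(x) = -f'(x)/f(x) applied to the data, where f is the noise density. The sketch below shows this textbook construction for Gaussian and Laplacian noise; it illustrates the general idea only and is not the set of criteria synthesized in the paper.

```python
import numpy as np

def lo_statistic(x, s, g):
    """Locally optimal detection statistic T = sum_i s_i * g(x_i) for a weak
    known signal s in i.i.d. noise, where g is the score -f'(x)/f(x) of the
    noise density f."""
    return np.sum(s * g(x))

# Score functions (up to a positive scale factor) for two standard noise models.
def g_gaussian(x):
    return x                    # Gaussian noise: linear correlator / matched filter

def g_laplacian(x):
    return np.sign(x)           # Laplacian noise: sign (hard-limiting) correlator

rng = np.random.default_rng(1)
s = 0.1 * np.ones(1000)                   # weak known signal
x = s + rng.laplace(size=1000)            # heavy-tailed noise favours the sign detector
print(lo_statistic(x, s, g_laplacian), lo_statistic(x, s, g_gaussian))
```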
In pixel-level image sequence fusion, a composite image sequence has to be built from several spatially registered input image sequences. One of the primary goals in image sequence fusion is the temporal stability and consistency of the fused image sequence. To fulfill these desiderata, we propose a novel approach based on a shift-invariant extension of the 2D discrete wavelet transform, which yields an overcomplete and thus shift-invariant multiresolution signal representation. The advantage of the shift-invariant fusion method is the improved temporal stability and consistency of the fused sequence compared to other multiresolution fusion methods. To evaluate the temporal stability and consistency of the fused sequence, we introduce a quality measure based on the mutual information between the inter-frame differences (IFDs) of the input sequences and of the fused image sequence. If the mutual information is high, the IFD of the fused sequence contains little information beyond that present in the IFDs of the input sequences, indicating a stable and consistent fused image sequence. We evaluate the performance of several multiresolution fusion schemes on a real-world image sequence pair and show that the shift-invariant fusion method outperforms the other multiresolution fusion methods with respect to temporal stability and consistency.
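A minimal reading of the proposed quality measure can be sketched as follows: compute the inter-frame difference of an input sequence and of the fused sequence, then estimate the mutual information between the two difference images from their joint histogram. The histogram binning and the averaging over frames below are our own choices, shown only to make the idea concrete.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information (in bits) between two images, estimated from their
    joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def ifd_stability(seq_in, seq_fused):
    """Average mutual information between the inter-frame differences (IFDs)
    of an input sequence and of the fused sequence; higher values indicate
    that the fused sequence introduces little temporal change not present in
    the input."""
    scores = []
    for k in range(1, len(seq_fused)):
        ifd_in = seq_in[k] - seq_in[k - 1]
        ifd_fused = seq_fused[k] - seq_fused[k - 1]
        scores.append(mutual_information(ifd_in, ifd_fused))
    return float(np.mean(scores))
```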
In this paper an attempt to work out a model of a satellite-based earthquake-precursor recognition system is presented. The system relies on a satellite-based multichannel very low frequency (VLF) radio spectrograph. A comparison of signals of different nature and properties observed simultaneously by the spectrograph is presented. Based on the available VLF precursor data, the model of the earthquake recognition system is worked out. The model is grounded on a multivariate Gaussian estimator for the signatures generated by the multichannel spectrograph. A computer simulation of the model is presented.
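The multivariate Gaussian estimator mentioned above can be illustrated by fitting a Gaussian to the spectrograph feature vectors of each class and comparing log-likelihoods for a new observation. The two-class (precursor versus background) decision structure and the use of scipy are our own assumptions for this sketch, not details taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_gaussian(features):
    """Fit a multivariate Gaussian to feature vectors (one row per observation
    from the multichannel spectrograph)."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    return mu, cov

def precursor_log_likelihood_ratio(x, precursor_model, background_model):
    """Log-likelihood ratio of a new feature vector under the hypothetical
    'precursor' and 'background' Gaussian models; a positive value favours
    the precursor class."""
    mu_p, cov_p = precursor_model
    mu_b, cov_b = background_model
    return (multivariate_normal.logpdf(x, mu_p, cov_p)
            - multivariate_normal.logpdf(x, mu_b, cov_b))
```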
An obviously important aspect of target tracking, and more generally, data fusion, is the combination of those pieces of multi-source information deemed to belong together. Recently, it has been pointed out that a random set approach to target tracking and data fusion may be more appropriate than the standard point-vector estimate approach -- especially in the case of large inherent parameter errors. In addition, since many data fusion problems involve non-numerical linguistic descriptions, in the same spirit it is also desirable to have a method which averages, in some qualitative sense, random sets which are non-numerically valued, i.e., which take on propositions or events, such as 'the target appears in area A or C, given the weather conditions of yesterday and source 1' and 'the target appears in area A or B, given the weather conditions of today and source 2.' This leads to the fundamental problem of how best to define the expectation of a random set. To date, this open issue has only been considered for numerically-based random sets. This paper addresses this issue in part by proposing an approach which is algebraically based, but also applicable to numerically-based random sets, and directly related to both the Frechet and the Aumann-Artstein-Vitale random set averaging procedures. The technique employs the concept of 'constant probability events,' which has also played a key role in the recent development of 'relational event algebra,' a new mathematical tool for representing various models in the form of various functions of probabilities.