Recent technological developments allow the gathering of huge amounts of data from different types of sensors, social networks, intelligence reports, distributed databases, etc. The quantity and heterogeneity of these data have forced information systems to evolve: nowadays they rely on complex information processing techniques applied at multiple processing stages. Unfortunately, possessing large quantities of data and being able to implement complex algorithms do not guarantee that the extracted information will be of good quality. Decision-makers need good-quality information in the decision-making process. We stress that, for a decision-maker, both the information and its quality, viewed as meta-information, are of great importance. A system that does not report information quality to its user risks being used incorrectly or, in more dramatic cases, not being used at all. Some information quality evaluation methodologies can be found in the literature, especially in organizational management and information retrieval, but none of them allows information quality to be evaluated in complex and changing environments. We propose a new information quality methodology capable of estimating information quality dynamically as the data and/or the inner workings of the information system change. Our methodology can instantaneously update the quality of the system's output. To capture how information quality changes as it propagates through the system, we introduce the notion of a quality transfer function. It is the counterpart of the transfer function in signal processing, but operates at the quality level: it describes the influence of a processing module on information quality. We also present two different views of information quality: a global one, characterizing the entire system, and a local one, attached to each processing module.
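To make the quality transfer function concrete, here is a minimal sketch in Python. The [0, 1] quality scale, the module names, and the example transfer functions are illustrative assumptions, not the paper's formalism; the sketch only shows how local (per-module) transfers compose into the global, system-level view.

```python
# Minimal sketch of composing per-module quality transfer functions.
# The [0, 1] quality scale, module names, and example functions are
# illustrative assumptions, not the paper's actual formalism.
from typing import Callable, List

QualityTransfer = Callable[[float], float]  # input quality -> output quality

def compose(modules: List[QualityTransfer]) -> QualityTransfer:
    """Global (system-level) quality transfer: chain of local transfers."""
    def system(q_in: float) -> float:
        q = q_in
        for transfer in modules:  # each module degrades or improves quality
            q = min(1.0, max(0.0, transfer(q)))
        return q
    return system

# Hypothetical local transfer functions for three processing stages.
sensor_fusion   = lambda q: 0.9 * q + 0.05   # slight degradation, small floor
noise_filtering = lambda q: q ** 0.8         # improves low-quality inputs most
classification  = lambda q: 0.95 * q         # small multiplicative loss

pipeline = compose([sensor_fusion, noise_filtering, classification])
print(f"output quality for input quality 0.7: {pipeline(0.7):.3f}")
```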
KEYWORDS: Sensors, Target detection, Antennas, Radar, Signal to noise ratio, Monte Carlo methods, Signal processing, Filtering (signal processing), Doppler effect, Analytical research
Space-Time Adaptive Processing (STAP) is a well-known technique for dealing with clutter in order to detect moving targets. It was derived under the assumption that the clutter has Gaussian characteristics. Unfortunately, when dealing with sea clutter, the Gaussian assumption is no longer valid [1], which causes an increased number of false alarms. In this paper we present an improved detector for non-Gaussian clutter. The detector was derived from the Generalized Likelihood Ratio Test (GLRT), assuming a Spherically Invariant Random Process (SIRP) as the clutter model. The resulting detector, named the Two Dirac-Deltas (TDD) detector, has an additional parameter (Δ) compared to classical STAP. Simulations show that it is crucial to choose the Δ parameter appropriately.
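For context, here is a minimal sketch of the classical Gaussian-clutter STAP baseline that such a detector is compared against. The GLRT/AMF-style statistic below is standard; the TDD detector itself and its Δ parameter are specific to the paper and not reproduced here. Dimensions, the steering vector, and the threshold are illustrative assumptions.

```python
# Minimal sketch of a classical STAP adaptive detector statistic
# (GLRT/AMF-style) under the Gaussian-clutter assumption; the paper's
# TDD detector and its Delta parameter are NOT reproduced here.
import numpy as np

rng = np.random.default_rng(0)
N = 16   # space-time degrees of freedom (channels x pulses), assumed
K = 64   # secondary (training) snapshots for covariance estimation, assumed

s = np.exp(2j * np.pi * 0.1 * np.arange(N)) / np.sqrt(N)  # steering vector
training = (rng.standard_normal((N, K))
            + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
R_hat = training @ training.conj().T / K                  # sample covariance

# Cell under test: target at amplitude 5 plus unit-power Gaussian noise.
x = 5.0 * s + (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

Ri_x = np.linalg.solve(R_hat, x)
Ri_s = np.linalg.solve(R_hat, s)
T = np.abs(s.conj() @ Ri_x) ** 2 / np.real(s.conj() @ Ri_s)  # test statistic

threshold = 15.0  # would be set by Monte Carlo for a desired false-alarm rate
print("detection" if T > threshold else "no detection", f"(T = {T:.2f})")
```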
In sonar imaging for seafloor remote sensing, research activities are increasingly oriented toward data fusion approaches. It is now well established that Digital Elevation Maps (DEMs) can be generated from sonar images by exploiting either the amplitude or the phase information of the acoustic signal. In this paper, the main interest lies in the generation of a complete DEM by fusing two existing DEMs produced by two different techniques. The aim of the proposed approach is to elaborate a general interpretation system that coherently links work on data selection and fusion, improving DEM generation and its exploitation in seafloor remote sensing applications (particularly for inhomogeneous scenes with varied terrain). Here, the shape-from-shading and interferometry techniques are considered. The proposed DEM fusion is based on fuzzy logic and on fuzzy propositions defined from an expert a priori knowledge source. This promising idea enables information to be managed by taking into account the imprecision and ambiguity of the information, and benefits from the injection of a priori knowledge into the decision-making system.
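A minimal sketch of the kind of fuzzy-weighted DEM fusion described above; the slope- and coherence-based membership functions are illustrative stand-ins for the paper's expert-defined fuzzy propositions.

```python
# Minimal sketch of fusing two DEMs with fuzzy reliability weights; the
# membership functions and the slope/coherence cues are illustrative
# assumptions standing in for the paper's expert-defined fuzzy propositions.
import numpy as np

def slope(dem: np.ndarray) -> np.ndarray:
    gy, gx = np.gradient(dem)
    return np.hypot(gx, gy)

def mu_reliable_sfs(local_slope):
    # Assumption: shape-from-shading is more trustworthy on smooth terrain.
    return np.clip(1.0 - local_slope / 0.5, 0.0, 1.0)

def mu_reliable_insar(coherence):
    # Assumption: interferometry is trusted where coherence is high.
    return np.clip((coherence - 0.3) / 0.5, 0.0, 1.0)

def fuse(dem_sfs, dem_insar, coherence):
    w_sfs = mu_reliable_sfs(slope(dem_sfs))
    w_insar = mu_reliable_insar(coherence)
    total = w_sfs + w_insar + 1e-9            # avoid division by zero
    return (w_sfs * dem_sfs + w_insar * dem_insar) / total

rng = np.random.default_rng(1)
dem_a, dem_b = rng.random((64, 64)), rng.random((64, 64))  # toy DEMs
fused = fuse(dem_a, dem_b, coherence=rng.random((64, 64)))
```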
KEYWORDS: Fuzzy logic, Radar, Data modeling, Classification systems, Information fusion, Fuzzy systems, Target detection, Statistical modeling, Databases, Control systems
Decision-making systems use heterogeneous information, affected by various kinds of imperfection, to identify an object class or a target. First, information issued from measurements (radar measurements, images) of an observation is represented by X variables. Generally, on these X variables each class can be described by a probability distribution function. These decision systems also integrate expert a priori knowledge to assist the decision. Such information is defined by Y variables and is represented by fuzzy membership functions. The question is how to combine these two kinds of data appropriately in order to improve the decision process.
In this paper, we present a decision model combining probabilistic and fuzzy data. The decision is defined using a fuzzy Bayesian approach that takes both kinds of imperfection into account. Only two classes are considered, using one X variable and one Y variable; an extension to more complicated cases is then proposed.
To validate the interest of this approach, we compare it with Bayesian classification and fuzzy classification applied separately to synthetic data. In addition, we show how our approach can be applied to the problem of radar system ranking, in which system resources are limited and, as a consequence, decisions about priorities must be taken. Using the system's information sources (probabilistic: radar measurements; fuzzy: prior expert knowledge; evidential), a comparison between Bayesian classification, fuzzy classification, the system decision, and the proposed approach is presented.
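A minimal sketch of a two-class decision combining a probabilistic likelihood on X with a fuzzy membership on Y; the product combination rule is one plausible instantiation, not necessarily the paper's exact fuzzy Bayesian operator, and all distributions and memberships are illustrative assumptions.

```python
# Minimal sketch: two-class decision combining p(x | class) with a fuzzy
# membership on y. The product rule is one plausible instantiation, not
# necessarily the paper's exact operator; all parameters are illustrative.
import numpy as np

def gaussian_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

priors = {"c1": 0.5, "c2": 0.5}
x_params = {"c1": (0.0, 1.0), "c2": (2.0, 1.0)}   # class-conditional p(x | c)

# Expert-defined fuzzy memberships on Y (triangular shapes, assumed).
memberships = {
    "c1": lambda y: float(np.clip(1.0 - abs(y - 0.2) / 0.8, 0.0, 1.0)),
    "c2": lambda y: float(np.clip(1.0 - abs(y - 0.9) / 0.8, 0.0, 1.0)),
}

def decide(x, y):
    scores = {c: priors[c] * gaussian_pdf(x, *x_params[c]) * memberships[c](y)
              for c in priors}
    return max(scores, key=scores.get), scores

label, scores = decide(x=1.2, y=0.8)
print(label, scores)
```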
In the field of pattern recognition from satellite images, existing road extraction methods have been either too specialized or too time consuming. The challenge has therefore been to develop a general and near-real-time road extraction method. This study falls within this perspective and aims at developing a near-real-time semi-automatic system able to extract linear planimetric features (including roads). Its main concern is to combine the most efficient tools for the road primitive extraction process in order to handle multi-resolution and multi-type raw images. Hence, this study introduces a new fusion model characterized by the combination of operator input points (in 2D or 3D), fuzzy image filtering, natural cubic splines, and the A* algorithm. First, a natural cubic spline interpolation of the operator points is used to parameterize the A* algorithm's cost function, with the consequence of restricting the search area. Second, the heuristic function of the same algorithm is combined with fuzzy filtering, which proves to be a fast and efficient tool for selecting the most promising primitive points. The combination of the cost function and the heuristic function leads to a limited number of hypothetical paths, hence decreasing the computation time. Moreover, the combination of the A* algorithm and the splines leads to a new way of solving perceptual grouping problems. Results related to the problem of feature discontinuity suggest new research perspectives for noisy areas (urban) as well as noisy data (radar images).
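A minimal sketch of the described combination: an A* search whose cost is confined to a corridor around a natural cubic spline through the operator points, with a fuzzy road-likeness map folded into the traversal cost (a simplification of the paper's heuristic-level combination). The grid size, corridor width, and the random stand-in for the fuzzy-filtered image are illustrative assumptions.

```python
# Minimal sketch: A* restricted to a corridor around a natural cubic
# spline through operator points; the fuzzy road-likeness map discounts
# the step cost (a simplification of the paper's heuristic combination).
import heapq
import numpy as np
from scipy.interpolate import CubicSpline

H, W = 60, 60
pts = np.array([[5, 5], [20, 30], [40, 35], [55, 55]])  # operator points (row, col)
t = np.linspace(0, 1, len(pts))
spline = CubicSpline(t, pts, bc_type="natural")
samples = spline(np.linspace(0, 1, 300))                # dense spline samples

# Corridor mask: only cells within a few pixels of the spline are searchable.
corridor = np.zeros((H, W), dtype=bool)
for r, c in samples:
    rr, cc = int(round(r)), int(round(c))
    corridor[max(0, rr - 3):rr + 4, max(0, cc - 3):cc + 4] = True

rng = np.random.default_rng(2)
road_likeness = rng.random((H, W))  # stand-in for the fuzzy-filtered image

def astar(start, goal):
    open_heap = [(0.0, start)]
    g = {start: 0.0}
    came = {}
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:                       # reconstruct path
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nb = (r + dr, c + dc)
                if nb == cur or not (0 <= nb[0] < H and 0 <= nb[1] < W):
                    continue
                if not corridor[nb]:          # cost effectively infinite outside
                    continue
                step = np.hypot(dr, dc) * (2.0 - road_likeness[nb])
                if g[cur] + step < g.get(nb, np.inf):
                    g[nb] = g[cur] + step
                    came[nb] = cur
                    h = np.hypot(goal[0] - nb[0], goal[1] - nb[1])  # admissible
                    heapq.heappush(open_heap, (g[nb] + h, nb))
    return None

path = astar(tuple(map(int, pts[0])), tuple(map(int, pts[-1])))
```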
The main topic of this study is edge detection using information fusion approaches. Edge detection methods are based on first- and second-order local operations followed by thresholding and edge tracking techniques. In this study, an intermediate fuzzy-evidential conceptual level is introduced between the gray-level information and the symbolic edge information. From the image, evidence concerning edges and regions is extracted using fuzzy membership functions as well as contextual information. The proposed approach can be decomposed into two steps: (1) application of an evidential reasoning approach in order to compute a basic mass function; (2) an edge detection process based on an iterative algorithm exploiting the contextual information and a belief mass function. The mass function computation uses the edge and region fuzzy membership functions of each pixel in the analyzed scene. The main interest of this step is to consider the membership functions as observed evidence instead of raw image gray-level values. The key idea of the second step is to use all the information about regions, edges, and context in the edge extraction process. The results obtained are encouraging, and the proposed methodology is shown to be robust in different noisy environments.
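A minimal sketch of deriving a basic mass function over the frame {edge, region} from fuzzy memberships, as in step (1); the gradient-based memberships and the discount factor are illustrative assumptions standing in for the paper's expert-defined functions.

```python
# Minimal sketch: basic mass function over the frame {edge, region}
# derived from fuzzy memberships; the gradient-based memberships and
# the discount factor are illustrative assumptions.
import numpy as np

def memberships(image):
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    mu_edge = np.clip(grad / (grad.max() + 1e-9), 0.0, 1.0)
    mu_region = 1.0 - mu_edge
    return mu_edge, mu_region

def basic_mass(mu_edge, mu_region, discount=0.8):
    # Discounted singleton masses; the leftover mass expresses ignorance
    # and is assigned to the whole frame {edge, region}.
    m_edge = discount * mu_edge
    m_region = discount * mu_region
    m_theta = 1.0 - m_edge - m_region
    return m_edge, m_region, m_theta

rng = np.random.default_rng(3)
img = rng.random((32, 32))                       # toy image
m_e, m_r, m_t = basic_mass(*memberships(img))
assert np.allclose(m_e + m_r + m_t, 1.0)         # valid basic mass assignment
```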