Proceedings Article | 16 September 1994
KEYWORDS: Image segmentation, Visualization, Image fusion, Quality measurement, Visual analytics, Sodium, Feature extraction, Image quality, Niobium, Monte Carlo methods
We address the problem of the display of vector-valued images, also known as vector fields or multiparameter images, in which a vector of data, rather than a scalar, is associated with each pixel of a pixel grid. Each component of the vector field defines a gray-scale image on the pixel grid. Vector fields usually arise when more than one physical property (henceforth an attribute) is measured for the object being imaged. Mapping the measured values of any one attribute to gray-scale on a pixel grid defines an image; the collection of these images then defines a vector field, with a vector of attributes corresponding to each pixel. The object usually consists of several disjoint regions, each made up of a region type, which we shall call a class. A set of attributes separates the classes (and therefore also the regions) if, for any two different classes, there exists an attribute in the set that takes distinct values on the two. We refer to a pixel grid whose pixels are labeled as belonging to different regions as an Underlying Image (UI). Vector fields arise naturally in any application that involves the display of multiparameter data or multisensor fusion, as surveyed in [1], the classical example being multispectral satellite images. Once we have a vector field of an object, consisting of measurements of a set of attributes that separates all of the classes in the object, we have the potential to distinguish between all of its regions. Regardless of the ultimate goal of the specific imaging application, most applications share a common intermediate goal, which is the one we address: to present a specialist human observer with an image that maximizes his chances of correctly classifying the image pixels into the different regions, that is, of segmenting the image.
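The definitions above (pixel grid, attribute vector per pixel, class separation, Underlying Image) can be sketched in code. The array layout, the function name `separates`, and the use of per-class mean attribute values are our own illustrative choices, not notation from the paper:

```python
import numpy as np

def separates(field, labels):
    """Check whether a set of attributes separates all classes.

    field : (H, W, K) array -- K attribute images on an H x W pixel grid.
    labels: (H, W) integer array -- the Underlying Image (UI) of class labels.

    A set of attributes separates the classes if, for every pair of
    distinct classes, some attribute takes distinct values on the two.
    Here the "value" of an attribute for a class is its per-class mean,
    an illustrative simplification.
    """
    classes = np.unique(labels)
    # Mean attribute vector for each class.
    means = np.array([field[labels == c].mean(axis=0) for c in classes])
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            if np.allclose(means[i], means[j]):
                return False  # no attribute distinguishes this pair
    return True

# Toy vector field: two attributes on a 4x4 grid, two regions.
labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1                  # right half of the grid is class 1
field = np.zeros((4, 4, 2))
field[..., 0] = labels             # attribute 0 separates the classes
field[..., 1] = 5.0                # attribute 1 is constant everywhere
print(separates(field, labels))    # True
```

Dropping attribute 0 (keeping only the constant channel) makes `separates` return `False`, since no remaining attribute distinguishes the two classes.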
Unfortunately, the human observer is inefficient at assimilating and integrating multidimensional data, and traditional methods such as the parallel display of the component images are often misleading, as demonstrated by an example in [1]. Another technique for vector field visualization is automatic image segmentation, which suffers from a serious drawback: it removes the specialist from the analysis. Our premise is that the specialist human observer should be able to bring to bear, on the final analysis process, all his prior knowledge and experience, which are difficult to capture in a computer program. This specialized prior knowledge could include information about the spatial structure of the different regions in the image, the contrast levels between the various regions, and the within-region variability. An automatically segmented image would deprive the specialist of the chance to apply this knowledge and would result in inferior performance in many situations. Instead, we have recently proposed [1] an approach that fuses the vector field into a single, most informative gray-scale image. This is done by finding linear combinations of the component images of the vector field that yield images with high discriminating power (see also [2, 3]). We note that, although the problem is closely related to the classical problem of optimum feature extraction, our approach is non-classical: it explicitly takes into account the spatial structure of the data. This paper concentrates on the quantitative analysis and objective performance evaluation of the proposed algorithms.
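One concrete way to form a discriminating linear combination, for the two-class case, is projection onto the classical Fisher discriminant direction. This is an illustrative stand-in only: the criterion actually used in [1] (which also accounts for spatial structure) is not reproduced here, and the function name and regularization constant are our own:

```python
import numpy as np

def fuse_fisher(field, labels):
    """Fuse a K-channel vector field into one gray-scale image.

    Projects each pixel's attribute vector onto the two-class Fisher
    discriminant direction w = Sw^{-1} (m1 - m0), a standard choice for
    maximizing class contrast with a single linear combination.
    """
    X = field.reshape(-1, field.shape[-1])   # pixels as rows of attributes
    y = labels.ravel()
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    # Within-class scatter, summed over the two classes.
    Sw = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
    # Small ridge term keeps the solve stable if Sw is near-singular.
    w = np.linalg.solve(Sw + 1e-9 * np.eye(len(m0)), m1 - m0)
    return (X @ w).reshape(field.shape[:2])

# Toy example: 3 channels, but only channel 0 carries class contrast.
rng = np.random.default_rng(0)
labels = np.zeros((8, 8), dtype=int)
labels[:, 4:] = 1
field = rng.normal(size=(8, 8, 3))
field[..., 0] += 2.0 * labels
gray = fuse_fisher(field, labels)
# Class 1 pixels are brighter than class 0 pixels in the fused image.
print(gray[labels == 1].mean() > gray[labels == 0].mean())  # True
```

By construction the projected class-mean difference equals (m1 - m0)ᵀ Sw⁻¹ (m1 - m0), which is positive, so the fused gray-scale image always renders class 1 brighter than class 0 regardless of which channels carry the contrast.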