The Advanced Distributed Simulation (ADS) Synthetic Environments Program seeks to create robust virtual worlds from operational terrain and environmental data sources of sufficient fidelity and currency to interact with the real world. While some applications can be met by direct exploitation of standard digital terrain data, more demanding applications -- particularly those supporting operations 'close to the ground' -- are well served by emerging capabilities for 'value-adding' by the user working with controlled imagery. For users to rigorously refine and exploit controlled imagery within functionally different workstations, they must have a shared framework that allows interoperability within and between these environments in terms of passing image and object coordinates and other information using a variety of validated sensor models. The Synthetic Environments Program is now being expanded to address rapid construction of virtual worlds, with research initiatives in digital mapping, softcopy workstations, and cartographic image understanding. The Synthetic Environments Program is also participating in a joint initiative for a sensor model application programmer's interface (API) to ensure that a common controlled-imagery exploitation framework is available to all researchers, developers and users. This presentation provides an introduction to ADS and the associated requirements for synthetic environments to support synthetic theaters of war. It provides a technical rationale for exploring applications of image understanding technology to automated cartography in support of ADS and related programs benefiting from automated analysis of mapping, earth resources and reconnaissance imagery. And it provides an overview and status of the joint initiative for a sensor model API.
Modeling and simulation are powerful problem solving techniques because they allow experimentation that would otherwise be cumbersome or impossible. However, care must be taken in executing the techniques to ensure credibility of results. This paper identifies critical elements of seven major steps in the modeling and simulation process that, when applied carefully, greatly enhance the likelihood of success.
We present the design of an image database system for remotely sensed imagery. The system stores and serves level 1B remotely sensed data, providing users with a flexible and efficient means for specifying and obtaining image-like products on either a global or a local scale. We have developed both parallel and sequential versions of the system; the parallel version uses the CHAOS++ and Jovian libraries, developed at the University of Maryland as part of an NSF grand challenge project, to support parallel object-oriented programming and parallel I/O, respectively.
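As an editorial illustration (not drawn from the paper), the core subsetting operation such a server performs -- mapping a user's geographic request onto a stored global grid -- can be sketched as follows. The grid layout and function name are hypothetical assumptions, not the system's actual interface:

```python
import numpy as np

def window_from_bbox(grid, lat_min, lat_max, lon_min, lon_max):
    """Cut a latitude/longitude window out of a global equal-angle grid.

    Assumes rows run from +90 (north) down to -90 degrees and columns
    from -180 to +180 degrees, one cell per degree-fraction of the grid.
    """
    rows, cols = grid.shape
    r0 = int((90.0 - lat_max) / 180.0 * rows)   # north edge -> first row
    r1 = int((90.0 - lat_min) / 180.0 * rows)   # south edge -> last row
    c0 = int((lon_min + 180.0) / 360.0 * cols)  # west edge -> first column
    c1 = int((lon_max + 180.0) / 360.0 * cols)  # east edge -> last column
    return grid[r0:r1, c0:c1]

# A 1-degree global grid of synthetic stand-in samples.
world = np.arange(180 * 360, dtype=float).reshape(180, 360)
patch = window_from_bbox(world, lat_min=10, lat_max=20, lon_min=30, lon_max=40)
print(patch.shape)  # → (10, 10)
```

A real server of this kind would add tiling, caching, and (as in the paper's parallel version) distributed I/O behind the same kind of request interface.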
We describe a system for rapid and convenient video data acquisition and 3-D numerical coordinate data calculation, able to provide precise 3-D topographical maps and 3-D archival data sufficient to reconstruct a 3-D virtual reality display of a crime scene or mass disaster area. Under a joint U.S. Army/U.S. Air Force project with collateral U.S. Navy support to create a 3-D surgical robotic inspection device -- a mobile, multi-sensor robotic surgical assistant to aid the surgeon in diagnosis, continual surveillance of patient condition, and robotic surgical telemedicine of combat casualties -- the technology is being perfected for remote, non-destructive, quantitative 3-D mapping of objects of varied sizes. This technology is being advanced with hyper-speed parallel video technology and compact, very fast laser electro-optics, such that 3-D surface map data will shortly be acquired within the time frame of conventional 2-D video. With simple field-capable calibration and mobile or portable platforms, the crime scene investigator could set up and survey the entire crime scene, or portions of it at high resolution, with almost the simplicity and speed of video or still photography. The survey apparatus would record relative position and location, and instantly archive thousands of artifacts at the site with 3-D data points capable of creating unbiased virtual reality reconstructions, or actual physical replicas, for the investigators, prosecutors, and jury.
Geographic information systems (GISs) represent a technology that is being used by many disciplines throughout the world. GIS is currently being used in many security-related applications for route planning, location of key facilities, and as an index into higher-detail data sets such as building plans. The coming of the Olympic Games to the Atlanta area in 1996 gives us a chance to show the practical uses of GIS in security applications. This paper investigates the potential for GIS and visualization techniques to be used in a hostage rescue situation and in planning the placement of surveillance cameras within the Georgia Tech campus, the 1996 Olympic Village.
The application of recent computer technology at the National Center for Missing and Exploited Children (NCMEC) has provided the means to age-progress the faces of long-term missing children. Among the thousands of cases of children who have been missing for two or more years, there is a particular priority on identifying and recovering these children. It is apparent that long-term solutions to this problem lie in the realm of technology. One of these areas is the computerized aging of children's faces. Forensic artists working with this new technology help this goal become a reality. When imaging a child's face, the forensic artist must consider using photographs of the biological family at an age consistent with the age of the missing child. With these pictures, a reasonable likeness can be produced using computer technology. This image can aid law enforcement, child-find and social service agencies, and the public in their search for the missing child. Unique features of the system provide for the stretching, merging, pixelation and refining of a completed progression. A knowledge of the steps of facial growth and anatomy is necessary to achieve an accurate image. Future developments in age progression and facial reconstruction may be in the realm of morphing technology. Application of this technology is being tested to provide a more accurate image for investigative use.
The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done on the use of these techniques for forensic purposes outside of forensic pathology, their use in the pathologic examination of wounding has been limited. We are investigating the use of image processing and three-dimensional visualization in the analysis of patterned injuries and tissue damage. While image processing will never replace classical understanding and interpretation of how injuries develop and evolve, it can be a useful tool: it can help an observer notice features in an image, help correlate surface injury with deep tissue injury, and provide a mechanism for developing a metric for how likely it is that a given object caused a given wound. We are also exploring methods of acquiring three-dimensional data for such measurements, which is the subject of a second paper.
The RADIUS system, a two-phase five-year project, is aimed at increasing imagery analyst (IA) productivity and improving the quality and timeliness of IA products. A key feature of RADIUS is model-supported exploitation (MSE), in which two-dimensional and three-dimensional models of a site are used as the basis of much of the subsequent analysis. Image understanding techniques figure strongly in this analysis. This paper describes the challenges being faced in the three-year Phase II RADIUS Testbed System (RTS) development. Plans for the development, testing, evaluation, and incremental enhancement of the RTS are described, as the project attempts to define and meet the real-world needs of imagery analysis and photo interpretation.
ARPA is currently sponsoring five institutions to perform research related to RADIUS. The efforts are primarily addressing the problems of semi- and fully automatic site model construction and change detection. Brief descriptions of the work at each institution are presented.
The RADIUS Common Development Environment (RCDE) pulls together many diverse functions into an integrated whole. The main goal of the environment is to provide a system for interactive modeling of 3-dimensional scenes from multiple images, as well as an infrastructure to support research in, and implementation of, image understanding-based algorithms for this and other tasks. The RCDE contains facilities for CAD-system-like 3D modeling; image processing; electronic-light-table image viewing and exploitation; frame and non-frame camera photogrammetry; and photorealistic rendering. The major achievement of the system is the high level of integration and interoperability between and among these facilities. The key realization that enables this is that every entity represented in the RCDE has an associated local coordinate system. This includes cartographic and cultural features, images and sub-images, text annotations, graphical user interface elements, photogrammetric conjugate points and even the earth itself. These entities are tied together through a flexible and efficient network of coordinate transformations. This allows each type of data to be represented, manipulated, and displayed in the most convenient and precise form, without sacrificing functionality or generality, in addition to enabling the fusion of different types of geometric data. In this paper, we explain the coordinate system representations and transformation facilities in the RCDE and outline some of the rationale and strategies behind the current design and implementation. Also included are examples drawn from its use in the government-sponsored RADIUS program.
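The local-coordinate-system idea described above can be sketched in a few lines of Python. These classes and names are hypothetical illustrations added for this collection, not the actual RCDE interfaces:

```python
import numpy as np

class CoordinateSystem:
    """A local frame tied to a parent frame by a 4x4 homogeneous transform."""
    def __init__(self, name, parent=None, transform=None):
        self.name = name
        self.parent = parent
        # `transform` maps local coordinates into parent coordinates.
        self.transform = np.eye(4) if transform is None else transform

    def to_world(self):
        """Compose transforms up the parent chain to the root frame."""
        m = self.transform
        frame = self.parent
        while frame is not None:
            m = frame.transform @ m
            frame = frame.parent
        return m

def convert(point, src, dst):
    """Express a 3D point given in frame `src` in frame `dst`."""
    p = np.append(np.asarray(point, dtype=float), 1.0)
    local = np.linalg.inv(dst.to_world()) @ (src.to_world() @ p)
    return local[:3]

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

world = CoordinateSystem("world")
site = CoordinateSystem("site", world, translation(100.0, 200.0, 0.0))
building = CoordinateSystem("building", site, translation(10.0, 5.0, 0.0))

# The building origin expressed in world coordinates: (110, 205, 0).
print(convert([0.0, 0.0, 0.0], building, world))
```

Because every entity carries its own frame, converting between any two entities reduces to composing transforms through a common ancestor, which is the essence of the transformation network the abstract describes.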
Model-based optimization (MBO) is a paradigm in which an objective function is used to express both geometric and photometric constraints on features of interest. A parametric model of a feature (such as a road, a building, or coastline) is extracted from one or more images by adjusting the model's state variables until a minimum value of the objective function is obtained. The optimization procedure yields a description that simultaneously satisfies (or nearly satisfies) all constraints, and, as a result, is likely to be a good model of the feature.
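A toy version of the MBO paradigm, added here for illustration, might look like the following: a two-parameter 'road' model (center and width) is fitted to a synthetic image by minimizing an objective with one photometric term and one geometric term. A real system would use a richer parametric model and a gradient-based optimizer rather than this exhaustive search; all names and numbers are illustrative:

```python
import numpy as np

# Synthetic 64x64 image: a bright vertical "road" stripe at columns 37..42.
img = np.zeros((64, 64))
img[:, 37:43] = 1.0
img += 0.05 * np.random.default_rng(0).standard_normal(img.shape)

def objective(center, width, img, prior_width=5.0, lam=0.01):
    """Photometric term rewards image brightness under the model;
    geometric term penalizes deviation from a prior road width."""
    lo = max(int(round(center - width / 2)), 0)
    hi = min(int(round(center + width / 2)), img.shape[1])
    if hi <= lo:
        return np.inf
    photometric = -img[:, lo:hi].mean()
    geometric = lam * (width - prior_width) ** 2
    return photometric + geometric

# Exhaustive search over a small discrete state space stands in for the
# optimizer that would adjust the model's state variables continuously.
states = [(c, w) for c in range(5, 60) for w in range(2, 12)]
best = min(states, key=lambda s: objective(s[0], s[1], img))
print(best)  # model state (center, width) recovered near the true stripe
```

The recovered minimum simultaneously (nearly) satisfies both constraints, which is the point of the paradigm: neither brightness alone nor the width prior alone determines the answer.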
The goal of automatic change detection in aerial imagery has attracted considerable research effort. A significant advance in achieving reliable change detection is offered by the application of spatial context, derived from a 3D site model. The use of 3D model context is a key approach of the RADIUS program, a 5 year ARPA project to develop tools to assist an image analyst in extracting intelligence information from aerial images. This paper introduces the concept of an observation event which provides a uniform mechanism for coupling the linguistic framework of intelligence concepts to image observations and associated image feature extraction and analysis processes. A blackboard-style processing architecture is being developed to compute the state of observation events using the spatial context of a 3D site model. The observation event class hierarchy is described along with experimental results of event computation in aerial images.
We address the application of model-supported exploitation techniques to synthetic aperture radar (SAR) imagery. The emphasis is on monitoring SAR imagery using wide-area 2D and/or 3D site models along with contextual information. We consider here the following tasks useful in monitoring: (a) site model construction using segmentation and labeling techniques, (b) target detection, (c) target classification and indexing, and (d) SAR image-site model registration. The 2D wide-area site models used here for SAR image exploitation differ from typical site models developed for RADIUS applications in that they do not model specific facilities, but constitute wide-area site models of cultural features such as urban clutter areas, roads, clearings, fields, etc. These models may be derived directly from existing site models, possibly constructed from electro-optical (EO) observations. When such models are not available, a set of segmentation and labeling techniques described here can be used for the construction of 2D site models. The use of models can potentially yield critical information which can disambiguate target signatures in SAR images. We address registration of SAR and EO images to a common site model. Specific derivations are given for the case of registration within the RCDE platform. We suggest a constant false alarm rate (CFAR) detection scheme and a topographic primal sketch (TPS) based classification scheme for monitoring target occurrences in SAR images. The TPS of an observed target is matched against candidate targets' TPSs synthesized for the preferred target orientation, inferred from context (e.g. road or parking lot targets). Experimental results on real and synthetic SAR images are provided.
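The abstract does not specify which CFAR variant is used; as an illustration only, a minimal cell-averaging CFAR (CA-CFAR) on a single range line of power samples might look like this, assuming exponentially distributed clutter power:

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR on a 1-D power profile.

    For each cell under test, local noise power is estimated from `train`
    training cells on each side, excluding `guard` cells adjacent to it.
    """
    n = len(power)
    num_train = 2 * train
    # Standard threshold factor for exponential clutter and the desired Pfa.
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for i in range(guard + train, n - guard - train):
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + train + 1]
        noise = (left.sum() + right.sum()) / num_train
        detections[i] = power[i] > alpha * noise
    return detections

rng = np.random.default_rng(1)
clutter = rng.exponential(1.0, 256)   # speckle-like background power
clutter[100] += 50.0                  # a bright point target
hits = np.flatnonzero(ca_cfar(clutter))
print(hits)  # the target cell stands out against the adaptive threshold
```

The adaptive threshold is what keeps the false alarm rate constant as the local clutter level varies across the scene, which is the property the monitoring application needs.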
Research on the formulation of invariant features for model-based object recognition has mostly been concerned with geometric constructs, either of the object or in the imaging process. We describe a new method that identifies invariant features computed from long-wave infrared (LWIR) imagery. These features are called thermophysical invariants and depend primarily on the material composition of the object. Features are defined that are functions of only the thermophysical properties of the imaged materials. A physics-based model is derived from the principle of conservation of energy applied at the surface of the imaged regions. A linear form of the model is used to derive features that remain constant despite changes in scene parameters/driving conditions. Simulated and real imagery, as well as ground-truth thermocouple measurements, were used to test the behavior of such features. A method of change detection in outdoor scenes is investigated. The invariants are used to detect when a hypothesized material no longer exists at a given location. For example, one can detect when a patch of clay/gravel has been replaced with concrete at a given site. This formulation yields promising results, but it can produce large values outside a normally small range. Therefore, we adopt a new feature classification algorithm based on the theory of symmetric alpha-stable (SαS) distributions. We show that SαS distributions model the thermophysical invariant data much better than the Gaussian model, and suggest a classifier with superior performance.
We have developed and evaluated a tool for change detection and other analysis tasks relevant to image exploitation. The tool, visGRAIL, integrates three key elements: (1) the use of multiple algorithms to extract information from images -- feature extractors or 'sensors,' (2) an algorithm to fuse the information -- presently a neural network, and (3) empirical estimation of the fusion parameters based on a representative set of images. The system was applied to test images in the RADIUS common development environment (RCDE). In a task designed to distinguish natural scenes from those containing various amounts of human-made objects and structure, the system correctly classified 95% of the 350 images in a test set. This paper describes details of the feature extractors, and presents analyses of the discriminatory characteristics of the features. visGRAIL has been integrated into the RCDE.
One of the primary functions of the Research and Development for Image Understanding Systems (RADIUS) testbed is to support the development and use of image understanding (IU) technologies in a model-supported exploitation (MSE) workstation environment. This paper describes a suite of storage capabilities added to the RADIUS testbed as part of the foundation providing this support. We discuss the storage requirements of IU processes and describe a database solution to satisfy them. We present our design, which addresses the issues of how to represent the data, what precisely to store, and how to retrieve the stored data. This is followed by a critique of the design.
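As an illustration only (the testbed's actual schema and interfaces are not described here), a minimal database for IU results keyed by site, image, and algorithm might be sketched like this; every name in it is a hypothetical stand-in:

```python
import sqlite3
import json

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE iu_result (
        site      TEXT NOT NULL,
        image_id  TEXT NOT NULL,
        algorithm TEXT NOT NULL,
        payload   TEXT NOT NULL,   -- JSON-encoded features/geometry
        PRIMARY KEY (site, image_id, algorithm)
    )
""")

def store_result(site, image_id, algorithm, payload):
    """Insert or overwrite one IU process's output for an image."""
    con.execute("INSERT OR REPLACE INTO iu_result VALUES (?, ?, ?, ?)",
                (site, image_id, algorithm, json.dumps(payload)))

def fetch_results(site, image_id):
    """Retrieve all stored IU outputs for one image of a site."""
    rows = con.execute(
        "SELECT algorithm, payload FROM iu_result WHERE site=? AND image_id=?",
        (site, image_id)).fetchall()
    return {alg: json.loads(p) for alg, p in rows}

store_result("siteA", "img042", "edge_extractor", {"edges": 1287})
store_result("siteA", "img042", "building_finder", {"boxes": [[10, 12, 40, 44]]})
print(fetch_results("siteA", "img042"))
```

The sketch captures the three design questions the abstract raises: representation (JSON payload per algorithm), what to store (one row per process output), and retrieval (keyed lookup by site and image).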
A new technique for finding object pose in 2D is presented. Given an object model and an image of the object, the algorithm uses a hierarchical approach to quickly prune out areas of the image that do not contain the object. At higher resolutions hypothesized object positions are refined and pruned according to a match score based on the current resolution. Once the best position hypotheses are obtained at maximum resolution, a second image of the object is used to pinpoint its 3D position. The system was used to correct the positions of hundreds of 3D object models of buildings in outdoor scenes.
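The coarse-to-fine pruning stage can be illustrated with a small template-matching sketch; this stands in for the paper's 2D search (the second-image 3D step is omitted), and all names and parameters are illustrative:

```python
import numpy as np

def downsample(img):
    """2x2 block average; assumes even dimensions."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def match_score(img, tmpl, y, x):
    patch = img[y : y + tmpl.shape[0], x : x + tmpl.shape[1]]
    return -np.abs(patch - tmpl).sum()  # higher is better

def coarse_to_fine_locate(img, tmpl, levels=2, keep=4):
    """Search the coarsest pyramid level exhaustively, then refine only the
    best few hypotheses in a small window at each finer level."""
    imgs, tmps = [img], [tmpl]
    for _ in range(levels):
        imgs.append(downsample(imgs[-1]))
        tmps.append(downsample(tmps[-1]))
    g, t = imgs[-1], tmps[-1]
    cands = [(match_score(g, t, y, x), y, x)
             for y in range(g.shape[0] - t.shape[0] + 1)
             for x in range(g.shape[1] - t.shape[1] + 1)]
    cands = sorted(cands, reverse=True)[:keep]
    for lvl in range(levels - 1, -1, -1):
        g, t = imgs[lvl], tmps[lvl]
        refined = []
        for _, y, x in cands:               # each hypothesis maps to 2*(y, x)
            for dy in (-1, 0, 1, 2):
                for dx in (-1, 0, 1, 2):
                    ny, nx = 2 * y + dy, 2 * x + dx
                    if (0 <= ny <= g.shape[0] - t.shape[0]
                            and 0 <= nx <= g.shape[1] - t.shape[1]):
                        refined.append((match_score(g, t, ny, nx), ny, nx))
        cands = sorted(refined, reverse=True)[:keep]
    return cands[0][1], cands[0][2]

rng = np.random.default_rng(2)
scene = rng.random((64, 64))
template = scene[24:32, 40:48].copy()
print(coarse_to_fine_locate(scene, template))  # → (24, 40)
```

Almost all of the image is pruned away at the coarsest level, so full-resolution scoring is only paid for a handful of surviving hypotheses -- the same economy the paper exploits when correcting hundreds of building models.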
In this report, we describe a novel method to automatically segment several kinds of cells in breast cancer pathology images. Information on the number of cells is expected to assist pathologists in the consistent diagnosis of breast cancer. Currently, most pathologists make a diagnosis based on a rough estimation of the number of cells in an image; because of this rough estimation, the diagnosis is not objective. To assist pathologists in making a consistent, objective and fast diagnosis, it is necessary to develop a computer system to automatically recognize and count several kinds of cells. As the first step toward this purpose, we propose a novel neural network model, called an adaptive-sized hybrid neural network (ASH-NN), and develop a method based on this network model to segment cells from breast cancer pathology images. The proposed neural network consists of three layers; the connection weights between the first and second layers are updated by self-organization, and the weights between the second and third layers are determined by supervised learning. The ASH-NN has the capability of (1) automatic adjustment of the number of hidden units and (2) quick learning.
An important problem in image analysis is finding small objects in large images. The problem is challenging because (1) searching a large image is computationally expensive, and (2) small targets (on the order of a few pixels in size) have relatively few distinctive features which enable them to be distinguished from non-targets. To overcome these challenges we have developed a hierarchical neural network (HNN) architecture which combines multi-resolution pyramid processing with neural networks. The advantages of the architecture are: (1) both neural network training and testing can be done efficiently through coarse-to-fine techniques, and (2) such a system is capable of learning low-resolution contextual information to facilitate the detection of small target objects. We have applied this neural network architecture to two problems in which contextual information appears to be important for detecting small targets. The first problem is one of automatic target recognition (ATR), specifically the problem of detecting buildings in aerial photographs. The second problem focuses on a medical application, namely searching mammograms for microcalcifications, which are cues for breast cancer. Receiver operating characteristic (ROC) analysis suggests that the hierarchical architecture improves the detection accuracy for both the ATR and microcalcification detection problems, reducing false positive rates by a significant factor. In addition, we have examined the hidden units at various levels of the processing hierarchy and found what appears to be representations of road location (for the ATR example) and ductal/vasculature location (for mammography), both of which are in agreement with the contextual information used by humans to find these classes of targets. We conclude that this hierarchical neural network architecture is able to automatically extract contextual information in imagery and utilize it for target detection.
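The value of low-resolution context can be shown with a deliberately simple stand-in for the HNN: logistic regression on two features rather than a pyramid of networks. Everything here (the "road" context variable, the noise levels) is a synthetic illustration, not the paper's data or model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
road = rng.integers(0, 2, n).astype(float)          # coarse-scale context cue
is_target = (road == 1) & (rng.random(n) < 0.5)     # targets occur only on roads
clutter = (road == 0) & (rng.random(n) < 0.5)       # off-road bright clutter
# Fine-scale brightness alone cannot separate targets from clutter.
fine = (is_target | clutter).astype(float) + 0.1 * rng.standard_normal(n)
y = is_target.astype(float)

def train_logreg(X, y, lr=0.5, iters=3000):
    """Plain gradient-descent logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(X, y, w):
    return np.mean((X @ w > 0) == (y == 1))

ones = np.ones(n)
X_fine = np.column_stack([ones, fine])           # fine feature only
X_ctx = np.column_stack([ones, fine, road])      # fine + coarse context
acc_fine = accuracy(X_fine, y, train_logreg(X_fine, y))
acc_ctx = accuracy(X_ctx, y, train_logreg(X_ctx, y))
print(acc_fine, acc_ctx)  # context removes most off-road false alarms
```

Without the context feature, bright pixels are targets or clutter at even odds; with it, the learned decision is effectively "bright AND on a road," mirroring the role low-resolution road and duct information plays in the paper's results.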
In this paper we present 3D registration techniques for detecting breast cancer lesions based on differences between high spatial resolution 'pre' and 'post' contrast-administration MR breast scans. We also present registration techniques for detecting lesions from the time course of enhancement after administration of the contrast agent, using the low spatial resolution, high temporal resolution dynamic MR data taken during absorption of the contrast agent. Alignment of the 'pre' and 'post' MR data and of the dynamic MR images is done by a direct optimization technique that estimates a global affine deformation model using a coarse-to-fine control strategy over a 3D pyramid. This global model is followed by a one-iteration flow estimation to account for any local non-rigidity. We present results of the registration process and visualization of the different volumes obtained after alignment.
Highly detailed polygonal surface data can prove difficult to handle and display in a real-time visualization/simulation environment. Level-of-detail techniques are still necessary with today's graphics workstations. An implementation of a quadtree-like recursive subdivision algorithm that divides a rectangular area into sixteen (four by four) areas is described. Continuity of a resulting polygonal mesh is ensured by a traversal step that explicitly joins the edges of regions of different resolution. Similar continuity is also ensured for color and normal vectors associated with each elevation grid point. The result is an unbroken surface with no appreciable discontinuities in color or shading, but increased detail in certain areas selectable by the user (such as target areas, mountains, or areas with less elevation change). This method is specialized for rectangular grid elevation data, but is fast enough to be performed dynamically based on distance of the observer from the terrain. The use of a hextree allows greater decimation for a given number of resolution levels compared to a quadtree, and minimizes the overhead associated with changing resolution levels during traversal for display. Examples shown include a virtual reality fly-over of terrain with a small proportion of mountains (at original full resolution) but a polygon count reduction of 98% overall, yielding more than an order of magnitude frame rate increase.
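The recursive 4x4 (sixteen-way) split can be sketched as follows. This is a hypothetical illustration of the subdivision criterion only; the authors' continuity stitching between neighboring resolution levels, and their view-dependent refinement, are omitted:

```python
import numpy as np

def subdivide(z, y0, x0, size, tol, out, level=0, max_level=2):
    """Recursively split a square elevation tile into 4x4 subtiles until the
    tile's elevation range falls under `tol` or the maximum depth is reached.
    Emits (y, x, size, level) tiles, mimicking a hextree traversal."""
    tile = z[y0 : y0 + size + 1, x0 : x0 + size + 1]  # +1: tiles share borders
    if level == max_level or tile.max() - tile.min() <= tol:
        out.append((y0, x0, size, level))
        return
    step = size // 4
    for i in range(4):
        for j in range(4):
            subdivide(z, y0 + i * step, x0 + j * step, step, tol, out,
                      level + 1, max_level)

# 64x64-cell elevation grid (65 samples per side): flat plain with one peak.
n = 64
yy, xx = np.mgrid[0 : n + 1, 0 : n + 1]
elev = np.exp(-(((yy - 16) ** 2 + (xx - 16) ** 2) / 50.0)) * 100.0

tiles = []
subdivide(elev, 0, 0, n, tol=5.0, out=tiles)
fine = sum(1 for t in tiles if t[3] == 2)   # finest tiles cluster at the peak
print(len(tiles), fine)
```

Flat regions are covered by a few large tiles while the peak is tiled finely, which is the polygon-count reduction the abstract reports; a sixteen-way split reaches a given decimation ratio in fewer levels than a quadtree's four-way split.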
This paper describes a virtual reality (VR) system developed for use as part of an integrated, low-cost, stand-alone, multimedia trainer. The trainer is used to train National Guard personnel in maintenance and troubleshooting tasks for the M1A1 Abrams tank, the M2A2 Bradley fighting vehicle and the TOW II missile system. The VR system features a modular, extensible, object-oriented design which consists of a training monitor component, a VR run-time component, a model loader component, and a set of domain-specific object behaviors which mimic the behavior of objects encountered in the actual vehicles. The VR system is built from a combination of off-the-shelf commercial software and custom software developed at RTI.
Simulation Networking (SIMNET) developed a set of paradigms for distributed interactive simulation (DIS) architectures. Designed for computer systems in the mid-1980s, they have proven to be quite successful for the exercises they supported. But as technology advances, the capability and desire to increase the size and scope of the exercises continue to grow. In order to do this, some of the basic tenets of the architecture need to be redesigned. In this paper we take a look at three of the fundamental limitations affecting scalability: networks, environmental databases, and processing power.