SBRC is at the forefront of the industry in developing IR focal plane arrays, including multi-spectral technology and 'third generation' functions that mimic the human eye. Third-generation devices conduct advanced processing on or near the FPA that serves to reduce bandwidth while performing needed functions such as automatic target recognition, uniformity correction and dynamic range enhancement. These devices represent a solution for processing the exorbitantly high bandwidth coming off large-area FPAs without sacrificing system sensitivity. SBRC's two-color approach leverages the company's HgCdTe technology to provide simultaneous multiband coverage, from short- through long-wave IR, with near-theoretical performance. IR systems that are sensitive to different spectral bands achieve enhanced capabilities for target identification and advanced discrimination. This paper provides a summary of the issues, the technology and the benefits of SBRC's third-generation smart and two-color FPAs.
We have built an all-electronic spectro-polarimetric imaging camera utilizing an acousto-optic tunable filter and a liquid crystal variable retardation plate. This combination of rapidly adjustable parameters allows operation at a 30 frame/sec rate and near-real-time adaptability to changing target signatures. The spectral capability of the AOTF permits us to apply simultaneous, multiple-wavelength filtering, which greatly increases selectivity. Electronically agile polarization analysis adds a valuable signature feature for many scenarios. The adjustable retardation gives the capability to analyze and display not only linear polarization but, more generally, elliptical polarization as well. We have developed background suppression algorithms based on spectral and polarization signatures so that a wide variety of targets may be displayed with greatly enhanced contrast.
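As a hedged illustration of the kind of polarization analysis such a camera enables, the sketch below computes the linear Stokes parameters and degree of linear polarization from four analyzer orientations (0, 45, 90, 135 degrees). This is a generic textbook scheme, not the liquid-crystal variable-retarder sequence described above, and all function names and the toy images are hypothetical.

    import numpy as np

    def linear_stokes(i0, i45, i90, i135):
        """Linear Stokes parameters from four analyzer images (generic
        0/45/90/135-degree scheme, not the LC-retarder method above)."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)    # total intensity
        s1 = i0 - i90                          # horizontal vs. vertical
        s2 = i45 - i135                        # +45 vs. -45 degrees
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
        aop = 0.5 * np.arctan2(s2, s1)         # angle of polarization (radians)
        return s0, s1, s2, dolp, aop

    # Toy 2x2 "images" for illustration only.
    i0   = np.array([[1.0, 0.8], [0.5, 0.2]])
    i45  = np.array([[0.9, 0.7], [0.5, 0.3]])
    i90  = np.array([[0.2, 0.4], [0.5, 0.2]])
    i135 = np.array([[0.3, 0.5], [0.5, 0.1]])
    print(linear_stokes(i0, i45, i90, i135)[3])   # degree of linear polarization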
Smart photodetector arrays (SPAs) have been suggested as a means to improve the performance of page-oriented optical memories (POMs). With a SPA interface, the optical data page is received by 2D arrays of 'smart' photodetector elements, replacing conventional CCDs. By integrating on-chip processing with the detector array, SPAs can be designed to perform fast parallel error control and data reduction, thereby providing a more efficient interface between the POM and the electronic host computer. In this paper, we discuss SPA requirements in terms of performance, power and scalability. We then present our design and analysis of a 0.35 micron CMOS smart photodetector array. Our implementation integrates a differential current receiver for optoelectronic signal conversion with a cluster error correction code. This approach provides for high optical sensitivity, low electrical power, and fast parallel error correction to achieve data rates of 10s to 100s of Gbits per second.
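The paper's cluster error correction code is not specified in the abstract; purely as a hedged stand-in, the sketch below shows the kind of fast, parallelizable single-error correction (a Hamming(7,4) syndrome decoder) that on-chip logic near a detector array could apply to small blocks of an optical data page. The received word is hypothetical.

    import numpy as np

    # Parity-check matrix for Hamming(7,4): column j is the binary
    # representation of j+1, so a single-bit error yields a syndrome
    # equal to the (1-based) error position.
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])

    def correct_single_error(codeword):
        """Correct at most one bit error in a 7-bit Hamming codeword
        (a generic example, not the paper's cluster code)."""
        r = np.array(codeword) % 2
        syndrome = H @ r % 2
        pos = int(syndrome[0]) * 4 + int(syndrome[1]) * 2 + int(syndrome[2])
        if pos:                       # non-zero syndrome -> flip the erroneous bit
            r[pos - 1] ^= 1
        return r

    received = [1, 0, 1, 1, 0, 1, 0]    # hypothetical 7-bit word read off the page
    print(correct_single_error(received))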
X-ray microscopy inherently possesses characteristics complementary to optical and electron microscopy. Short-wavelength x-ray radiation, especially in the so-called 'water window', permits a twenty-fold improvement in spatial resolution over optical microscopy while preserving a depth of field large enough to image whole biological specimens in their natural state. Whereas electron microscopy can access atomic-scale resolution, it can only be applied to biological and medical specimens at the expense of detrimental preparation procedures that preclude real-time analysis of structural changes in living organisms. We describe progress being made in an x-ray imaging technology that provides high-resolution, single-frame x-ray images of in vitro specimens captured in a time sufficiently short that radiation damage to the structure is not recorded. Several different biology and medical research groups find this type of microscopy particularly well suited to the detailed analysis of sub-cellular features and to the study of live organisms subjected to various forms of external stimuli. This technology utilizes bright x-ray sources produced by compact pulsed laser systems. The incorporation of advanced x-ray optical and electron-optical systems will lead to the development of a compact, real-time x-ray microscope having a broad range of applications.
Katherine N. Scott, David C. Wilson, Angela P. Bruner, Teresa A. Lyles, Brandon Underhill, Edward A. Geiser, J. Ray Ballinger, James D. Scott, Christine B. Stopka
A major problem of in vivo P-31 magnetic resonance spectroscopy (MRS) applications is that when large data sets are acquired, the time invested in data reduction and analysis with currently available technologies may totally overshadow the time required for data acquisition. An example is our MRS monitoring of exercise therapy for patients with peripheral vascular disease. In these studies, spectral acquisition requires 90 minutes per patient, whereas data analysis and reduction require 6-8 hours. Our laboratory currently uses the proprietary software SA/GE developed by General Electric; other software packages have similar limitations. When data analysis takes this long, the researcher does not have the rapid feedback required to ascertain either the quality of the acquired data or the result of the study. This is highly undesirable even in a research environment and becomes intolerable in the clinical setting. The purpose of this report is to outline progress towards the development of an automated method for eliminating the spectral analysis burden on the researcher working in the clinical setting.
As technology advances and population increases, our planet is changing more rapidly than ever before. Today, land managers and policy makers struggle to understand the effects of change and its implications. Now, more than ever before, decision makers need timely and accurate information and tools to analyze information in meaningful ways.
I would like to open my presentation by referring to the main task that mankind is facing today. This task involves not only the survival of the present generation but also providing for the generations to come. Our generation and the generations after us should realize that the environment can be our best friend only if we have enough strength to admit that we do not know much about the world around us and the environment we are living in. An alliance between mankind and the environment can be accomplished only through mutual respect, understanding and knowledge of each other. I believe we will be able to understand nature, and nature will reciprocate. Presently, ecology and environmental science are only beginning to take shape, and environmental science is still trying to determine its place among the other branches of science. Scientists all over the world are drawing on the experience gained by other, more advanced scientific disciplines and attempting to apply that knowledge to a subject matter as difficult as nature and the environment around us.
Hyperspectral imagery, i.e., imagery with more than a hundred spectral bands, is particularly useful for material identification. Since each pixel is a spectral signature, comparing that signature with a library of signatures for known materials allows each pixel's material to be identified as the one with the closest match. The word 'match', of course, must be defined, since many measures of matching are in use. This material identification process becomes considerably less straightforward, however, when the pixel on the ground includes multiple materials; the pixel is then 'mixed' and no single library signature will match. Rather, a sum of library signatures, with appropriate coefficients of proportionality, that matches the pixel's signature must be determined. The determination of these coefficients of proportionality is termed 'unmixing'. A variety of unmixing methods have been developed and reported in the literature. This paper describes a new algorithm based on linear programming (LP), an optimization method borrowed from operations research; sophisticated LP software is currently available for virtually every computer. The paper presents an approach rather than a proven method, since the algorithm has not yet been evaluated on real hyperspectral imagery and no claims can yet be made for its performance, although such test and evaluation activities are planned using AVIRIS data from the Jet Propulsion Laboratory.
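To make the LP idea concrete, here is a minimal sketch of one generic way linear unmixing can be cast as a linear program: minimize the L1 norm of the residual between the pixel and a non-negative, sum-to-one combination of library signatures. It uses scipy.optimize.linprog; the paper's own formulation may differ in detail, and the toy endmember library is hypothetical.

    import numpy as np
    from scipy.optimize import linprog

    def unmix_lp(pixel, endmembers):
        """Estimate abundance fractions for one pixel by linear programming.

        Minimizes ||E @ a - pixel||_1 subject to a >= 0 and sum(a) == 1,
        where 'endmembers' E is a (bands x materials) library matrix."""
        n_bands, n_mat = endmembers.shape
        # Decision variables: abundances a (n_mat) and residual bounds t (n_bands).
        c = np.concatenate([np.zeros(n_mat), np.ones(n_bands)])
        eye = np.eye(n_bands)
        A_ub = np.block([[ endmembers, -eye],
                         [-endmembers, -eye]])          # +/-(E a - p) <= t
        b_ub = np.concatenate([pixel, -pixel])
        A_eq = np.concatenate([np.ones(n_mat), np.zeros(n_bands)])[None, :]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (n_mat + n_bands), method="highs")
        return res.x[:n_mat]

    # Toy example: 4 bands, 2 materials, pixel = 70/30 mixture plus a small offset.
    E = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.2, 0.9]])
    p = 0.7 * E[:, 0] + 0.3 * E[:, 1] + 0.01
    print(unmix_lp(p, E))   # approximately [0.7, 0.3]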
In this paper we present initial results from an automated terrain classification algorithm that utilizes the information collected by an interferometric synthetic aperture radar system. It is shown that by combining radar cross section imagery with height maps, additional information concerning terrain types can be extracted automatically, with the height information differentiating forests from other terrain classes and the radar cross section information differentiating field types. When tested on the same data it was trained on, classification accuracies of 94 to 100 percent are obtained. Similar results are generated when tested on a different data set, although to date this has only been determined by visual comparison. The largest problem with the algorithm is its use of absolute height information, which confuses high fields with forests. Work is ongoing to develop relative height measures to improve the robustness of the algorithm.
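The sketch below shows the basic decision logic the abstract describes (height separates forest from non-forest, radar cross section separates field types) as a toy per-pixel rule. It is only an illustration: the thresholds are hypothetical, and the paper trains its classifier on data rather than using fixed cut-offs.

    import numpy as np

    def classify_terrain(rcs_db, height_m, tree_height=5.0, bright_field=-8.0):
        """Toy per-pixel terrain labelling from InSAR products.

        rcs_db   : radar cross section image (dB)
        height_m : height relative to the local ground surface (metres)
        The thresholds are hypothetical; note the paper's caveat that
        absolute height confuses high fields with forests."""
        labels = np.full(rcs_db.shape, "field_low", dtype=object)
        labels[rcs_db > bright_field] = "field_bright"   # RCS separates field types
        labels[height_m > tree_height] = "forest"        # height separates forest
        return labels

    rcs = np.array([[-12.0, -5.0], [-10.0, -3.0]])
    hgt = np.array([[1.0, 2.0], [12.0, 15.0]])
    print(classify_terrain(rcs, hgt))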
In this paper two complementary approaches for object motion tracking are presented. The objects considered, which are highly deformable, are rain cells extracted from weather radar image sequences. The first approach formulates the problem as an object-to-object assignment, treated as a combinatorial optimization problem and solved by ad hoc approximate heuristics. The second approach aims at implicitly identifying the motion components and predicting their future evolution by linear extrapolation. Experimental results are presented, demonstrating the successful tracking of rain cells and pointing toward the prediction of heavy rainfall.
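As a hedged sketch of the object-to-object assignment step, the snippet below matches rain-cell centroids between two consecutive frames by minimum-cost matching (the Hungarian algorithm via scipy.optimize.linear_sum_assignment) on centroid distance. The paper itself solves the assignment with ad hoc approximate heuristics, so this is a generic stand-in, and max_dist is a hypothetical gating parameter.

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def match_cells(prev_centroids, curr_centroids, max_dist=25.0):
        """Match rain-cell centroids between two consecutive radar frames
        by minimum-cost assignment on centroid distance."""
        cost = cdist(prev_centroids, curr_centroids)     # pairwise distances
        rows, cols = linear_sum_assignment(cost)
        # Reject matches whose displacement is implausibly large.
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

    prev_c = np.array([[10.0, 12.0], [40.0, 55.0]])
    curr_c = np.array([[42.0, 58.0], [13.0, 14.0], [90.0, 90.0]])
    print(match_cells(prev_c, curr_c))   # [(0, 1), (1, 0)]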
We describe a technique for using geometric information contained in 2D images to search large databases of 3D models in the special case where the geometric information consists of finite point configurations. This technique exploits certain polynomial relations known as object/image equations between invariant coordinates assigned to 2D and 3D feature sets. The resulting scheme is invariant to changes in scale and perspective. Techniques for constructing indexes based on this technology as well as experimental results on numerical stability will also be described.
In this paper, improvements to the preprocessing stage of a surface mounted device (SMD) image classification system (ICS) are presented. The ICS uses two images of the same scene taken under two different types of illumination, top and side. Each scene corresponds to one of three cases: SMD present, SMD absent with a speck of glue, and SMD absent with no speck of glue. After areas of improvement to the ICS are identified, a methodology is presented to define and evaluate preprocessing methods. The methodology first defines criteria to improve the images for a fixed case, and from it two new preprocessing methods, NewC and NewB, are proposed. The existing method (ImageC), which combines the images with a simple subtraction operation, is also evaluated since it serves as a reference. The NewC method applies edge enhancement to the available images before subtraction, while the NewB method applies edge enhancement to the side-illuminated image alone. The methodology then requires the development of image measurement descriptors that are computed for each preprocessing output image for the SMD-present cases in the training database. Some descriptors use the Radon transform to describe SMD edges in the images; others use energy or a signal-to-noise ratio measure. From the descriptors, image improvement indicators are developed and computed. These are statistical measures of data dispersion applied to the distributions of one or more descriptors, and they allow us to assess preprocessing systems. Both the NewC and NewB methods are clearly shown to be superior to the ImageC method. The NewC method is slightly better than the NewB method, indicating that very little use is made of the information in the top-illuminated image.
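Purely as a rough sketch of the "enhance, then combine" idea behind NewC and NewB, the snippet below uses Sobel gradient magnitude as a stand-in edge enhancement; the paper does not specify its enhancement operators here, and the function names and test images are hypothetical.

    import numpy as np
    from scipy import ndimage

    def edge_enhance(img):
        # Sobel gradient magnitude stands in for the paper's enhancement operators.
        gx = ndimage.sobel(img, axis=0, output=float)
        gy = ndimage.sobel(img, axis=1, output=float)
        return np.hypot(gx, gy)

    def preprocess_newc_like(top_img, side_img):
        """NewC-style: edge-enhance both illumination images, then subtract."""
        return edge_enhance(top_img) - edge_enhance(side_img)

    def preprocess_newb_like(side_img):
        """NewB-style: edge enhancement of the side-illuminated image only."""
        return edge_enhance(side_img)

    rng = np.random.default_rng(0)
    top, side = rng.random((64, 64)), rng.random((64, 64))
    print(preprocess_newc_like(top, side).shape)     # (64, 64)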
The objective of this paper is to perform fast image registration by progressively registering wavelet representations at different resolutions. It is well known that some subbands of a wavelet representation are sensitive to small translations of an image: the energy in a subband can move entirely into other subbands when wavelet representations of shifted versions of the image are formed. This seems to preclude the practical use of wavelet representations for image registration. In this paper we show that registration with wavelet techniques can be done effectively, and we show this by examining the sensitivity of wavelet representations to image translations. Experiments on general mathematical models and actual satellite data produced these findings: 1. Registration using a wavelet with a block size of B pixels is robust under image translation for features that extend at least 2B pixels. Distortion from translation causes the peak correlation in the low-pass subband to vary only between roughly 0.8 and 1.0, with an average in excess of 0.9 for features larger than 2B. 2. The high-pass subband is less robust than the low-pass subband. Edge information at low resolution is best exploited by using the high-pass subband of a low-resolution image created by recursive low-pass filtering of the original image. For this case, translation distortion drops the correlation to a range from 0.2 to 0.8, which is low in some cases but still useful on average for registration. 3. There is little difference in the correlations produced by Haar and Daubechies wavelet transforms. 4. The mathematical model and the experiments with a real image gave consistent results.
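A minimal sketch of the low-pass-subband idea, assuming integer translations: decompose both images with PyWavelets, then estimate the shift by cross-correlating the approximation subbands. This is an illustration of finding 1 above, not the authors' full multiresolution procedure, and the function names are hypothetical.

    import numpy as np
    import pywt
    from scipy.signal import fftconvolve

    def lowpass_subband(img, wavelet="haar", levels=1):
        # Keep only the approximation (low-pass) subband after 'levels' DWT steps.
        for _ in range(levels):
            img, _ = pywt.dwt2(img, wavelet)
        return img

    def estimate_shift(reference, target, wavelet="haar", levels=1):
        """Estimate the translation of 'target' relative to 'reference' by
        cross-correlating their low-pass subbands (approximate: the shift is
        recovered at the reduced resolution of the subband)."""
        a = lowpass_subband(np.asarray(reference, float), wavelet, levels)
        b = lowpass_subband(np.asarray(target, float), wavelet, levels)
        a = a - a.mean()
        b = b - b.mean()
        corr = fftconvolve(b, a[::-1, ::-1], mode="full")     # cross-correlation
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        dy = (peak[0] - (a.shape[0] - 1)) * 2 ** levels       # back to full-resolution pixels
        dx = (peak[1] - (a.shape[1] - 1)) * 2 ** levels
        return dy, dx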
In classification, the goal is to assign an input vector to one of a discrete number of output classes. Classifier design has a long history, and classifiers have been put to a large number of uses. In this paper we continue the previously begun task of categorizing classifiers by their computational complexity. In particular, we derive analytical formulas for the number of arithmetic operations in the probabilistic neural network (PNN) and its polynomial expansion, also known as the polynomial discriminant method (PDM), and in the mixture model neural network (M2N2). In addition, we test the classification accuracy of the PDM against the PNN and the M2N2 and find that all three are close in accuracy. Based on this research we can now choose among them on the basis of computational complexity, memory requirements and training set size, which is a great advantage in an operational environment. We also discuss the extension of such methods to hyperspectral data and find that only the M2N2 is suitable for application to such data.
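For readers unfamiliar with the PNN, here is a minimal Parzen-window sketch that makes the complexity trade-off visible: each classification requires one Gaussian kernel evaluation per stored training exemplar, which is the kind of operation count the paper derives analytically. The kernel width sigma and the toy data are hypothetical.

    import numpy as np

    def pnn_classify(x, train_X, train_y, sigma=1.0):
        """Minimal probabilistic neural network (Parzen-window) classifier.

        For each class, averages Gaussian kernels centred on that class's
        training samples; the cost grows linearly with the training-set size."""
        classes = np.unique(train_y)
        scores = []
        for c in classes:
            d2 = np.sum((train_X[train_y == c] - x) ** 2, axis=1)
            scores.append(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
        return classes[int(np.argmax(scores))]

    # Toy 2-class example.
    X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
    y = np.array([0, 0, 1, 1])
    print(pnn_classify(np.array([0.1, 0.0]), X, y))   # -> 0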
Hyperspectral image sensors provide images with a large number of contiguous spectral channels per pixel and enable information about different materials within a pixel to be obtained. The problem of spectrally unmixing materials may be viewed as a specific case of the blind source separation problem where data consists of mixed signals and the goal is to determine the contribution of each mineral to the mix without prior knowledge of the minerals in the mix. The technique of independent component analysis (ICA) assumes that the spectral components are close to statistically independent and provides an unsupervised method for blind source separation. We introduce contextual ICA in the context of hyperspectral data analysis and apply the method to mineral data from synthetically mixed minerals and real image signatures.
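As a hedged illustration of ICA-based unmixing on synthetically mixed spectra, the sketch below substitutes scikit-learn's FastICA for the contextual ICA used in the paper; the "mineral" spectra and abundances are synthetic, and the recovered components are only determined up to scale and ordering.

    import numpy as np
    from sklearn.decomposition import FastICA

    # Two hypothetical "mineral" spectra over 50 bands (rows of 'sources').
    rng = np.random.default_rng(0)
    bands = np.linspace(0.0, 1.0, 50)
    sources = np.vstack([np.sin(6 * np.pi * bands) + 1.5,
                         np.exp(-((bands - 0.4) ** 2) / 0.01)])

    # Synthetic mixed pixels: random abundances times the spectra, plus noise.
    abundances = rng.random((200, 2))
    pixels = abundances @ sources + 0.01 * rng.standard_normal((200, 50))

    # FastICA stands in here for the contextual ICA used in the paper.
    ica = FastICA(n_components=2, random_state=0)
    weights = ica.fit_transform(pixels)      # per-pixel component weights (~abundances)
    spectra = ica.mixing_.T                  # rows ~ estimated source spectra
    print(weights.shape, spectra.shape)      # (200, 2) (2, 50)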
A new approach is proposed for extracting explicit representations of 3D curvilinear features from stacks of 2D images. The images, which are of brain tissue, were obtained by confocal microscopy, and the features represent the dendritic tree structure surrounding a neuron. Voxels with a high probability of being on the center-lines of the dendrites are identified first. Then a combination of a 3D minimum spanning tree and a 3D minimum cost path algorithm is used to automatically extract explicit center-line representations of the curvilinear features. The final objective of the image analysis is to produce, as automatically as possible, generalized cylinder models of the dendritic structures, which are then used for studying neuronal morphology and function. In this paper, we concentrate on the algorithms used to extract the center-line representation.
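A minimal sketch of the graph step, assuming the candidate centre-line points have already been found: build a distance-weighted graph over the points, take its minimum spanning tree with scipy.sparse.csgraph, and trace a minimum-cost path between two endpoints. The paper's costs are derived from centre-line probabilities rather than plain Euclidean distance, and the points below are hypothetical.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path
    from scipy.spatial.distance import cdist

    # Hypothetical candidate centre-line points (would be 3D voxel centres).
    points = np.array([[0, 0, 0], [1, 0, 0], [2, 1, 0], [3, 1, 1], [1, 2, 0]], float)

    # Fully connected graph weighted by Euclidean distance (plain distance
    # stands in for the paper's probability-derived costs).
    dist = cdist(points, points)
    mst = minimum_spanning_tree(csr_matrix(dist))          # sparse MST edges

    # Minimum-cost path between two endpoints through the tree.
    costs, predecessors = shortest_path(mst, directed=False,
                                        return_predecessors=True, indices=0)
    path, node = [], 3                                     # trace back from point 3 to point 0
    while node != -9999:                                   # -9999 marks the source
        path.append(node)
        node = predecessors[node]
    print(path[::-1])        # ordered centre-line node indices from 0 to 3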
MRI has made significant advances since its introduction over fifteen years ago. The technology has been driven by a combination of higher magnetic fields, more efficient pulse sequence design and technical advances in the transducer technology associated with the capture of weak magnetic resonance signals. This paper explores those advances with particular emphasis on state-of-the-art high-field MRI systems and the latest radio frequency (RF) transducers, commonly referred to as RF coils. The design and construction of large-bore magnets operating at high magnetic fields has been the special purview of a limited number of engineering companies, while the design and construction of RF coils has been addressed by a wider range of physicists and engineers working at major universities as well as in industry. Our work at the University of Florida has been mainly focused on developing RF coils to address the unique problems presented by operating at high magnetic fields and frequencies.
The development and evaluation of a new class of algorithms for computer-assisted diagnosis (CAD) methods for segmentation and detection of masses in digitized mammograms is reported. Both non-adaptive and adaptive methods are reported that employ two key novel CAD modules specifically tailored for digital mammography, namely: (a) a multiorientation directional wavelet transform for removal of directional features and for the direct detection of spiculations in spiculated lesions, and (b) a multiresolution wavelet transform for image enhancement to improve the segmentation of suspicious areas. The aim of the work is to provide a brief overview of both the non-adaptive and adaptive methods and a comparison of their performance using ROC curves. An image database containing regions of interest (ROI), enclosing all mass types and normal tissues and with established electronic ground truth, was used for the relative comparison of performance. The results confirm the importance of using adaptive CAD methods, which should potentially allow a more generalized and robust application to larger image databases, images generated from different sensors, or direct x-ray detection, as required for clinical trials and teleradiology applications.
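Purely to illustrate the flavor of module (b), the sketch below performs a simple multiresolution wavelet enhancement with PyWavelets: decompose, amplify the detail subbands with a fixed gain, and reconstruct. The gain and wavelet choice are hypothetical, and the paper's adaptive and directional modules are considerably more elaborate than this.

    import numpy as np
    import pywt

    def wavelet_enhance(image, wavelet="db4", levels=3, gain=2.0):
        """Simple multiresolution enhancement: amplify detail subbands.

        Decomposes the image, multiplies the detail coefficients at every
        level by a fixed gain, and reconstructs."""
        coeffs = pywt.wavedec2(image, wavelet, level=levels)
        enhanced = [coeffs[0]]                              # keep the approximation
        for (cH, cV, cD) in coeffs[1:]:
            enhanced.append((gain * cH, gain * cV, gain * cD))
        return pywt.waverec2(enhanced, wavelet)

    img = np.random.default_rng(0).random((128, 128))
    out = wavelet_enhance(img)
    print(out.shape)    # (128, 128)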
This paper describes our work in enhancing and analyzing digital mammograms from the Digital Database for Screening Mammography (DDSM). The DDSM will ultimately contain 3000 cases and provides a unique opportunity for researchers from around the world to compare results on a large, diverse data set. However, the size of the database and of the images within it requires careful consideration of memory limitations, display device constraints, etc. We address research problems connected with the modification and application of existing fuzzy modeling approaches to this digital mammography domain. Segmentation and edge detection are used as benchmark applications for the comparisons we make.
The theory of the Radon transform forms the foundation for problems of reconstruction from projections. For example, in computerized tomography (CT), the raw data can be identified with the Radon transform of the image, and the desired image is found by applying the inverse Radon transform to the projection data. In cases where it is desired to image a local region that is small in comparison to the entire image, there is a problem due to the global nature of the inverse Radon transform in 2D. From a practical point of view this means we must have projection data for regions that are not in the region of interest (ROI) in order to stabilize the inversion process that yields the ROI. Introduction of the wavelet transform as an intermediate part of the inversion leads to an important improvement in this procedure. It is possible to devise algorithms such that significantly less radiation exposure is required without causing a noticeable degradation of the image in the ROI. The key is to use wavelets with several vanishing moments and to do appropriate sparse sampling away from the ROI. Radon transform inversion is reviewed for three major inversion algorithms, and a brief summary of wavelets is given. The current state of wavelet-based Radon transform inversion is reviewed along with potential applications to CT, limited-angle CT, and single photon emission computed tomography.
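For orientation, here is the standard global baseline that the wavelet-based local (ROI) methods above modify: a forward Radon transform of a test phantom followed by filtered back-projection, using scikit-image. The sparse sampling away from the ROI and the wavelet intermediate step are not reproduced in this sketch.

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    # Standard (global) Radon transform and filtered back-projection.
    image = rescale(shepp_logan_phantom(), 0.25)          # small test phantom
    theta = np.linspace(0.0, 180.0, image.shape[0], endpoint=False)
    sinogram = radon(image, theta=theta)                  # plays the role of raw CT data
    reconstruction = iradon(sinogram, theta=theta)        # filtered back-projection
    print(np.abs(reconstruction - image).mean())          # small reconstruction error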
Approximately 50 percent of breast cancers are detected on the basis of calcifications alone. Regrettably, the presence of such calcifications is non-specific; only 30 percent of biopsies based on suspicious calcifications are malignant. We have investigated three methods (LVR) for 3D imaging and analysis of microcalcifications. Our aim is to increase specificity by more accurately distinguishing between calcifications indicative of benign and malignant breast lesions. We have demonstrated that 3D imaging of calcifications is possible using an LVR technique that includes semi-automated segmentation, correlation, and reconstruction of the calcifications. A clinical study of the LVR method is ongoing in which 2D film and digital images are compared to 3D images. The images are evaluated using a rating of 1 to 5, where 1 means definitely benign, 5 means definitely malignant, and a score of 3 or higher requires biopsy. To date, 3 radiologists have evaluated the images of 44 patients for whom biopsy results were available. The use of 2D and 3D digital images more than doubled the diagnostic accuracy, from 36 percent to 77 percent. Comparison to other techniques is ongoing. Additionally, a high-resolution CT scanner for breast tissue specimens is under construction for comparison of the reconstructed images to a 'gold standard'.
A new corneal topography system is described which combines proven grid projection and stereo triangulation techniques with an innovative user interface which simplifies the data capture process. Principles of the imaging, measurement, and calibration processes used with the system are presented. The device generates a complete topographic model of the anterior corneal surface with spatial resolution of 0.2 millimeters and elevation accuracy of 2 microns. System applications include pre- and post-operative assessment of refractive surgery patients, contact lens fitting including specification of custom RGP lenses, and excimer surgery planning and simulation. The innovative features of the system are described along with preliminary results of accuracy evaluations.
In this paper, a rotation-invariant fingerprint identification system is implemented using a circular harmonic filter and a binary phase extraction joint transform correlator. We show that this system has shift- and rotation-invariant properties and can recognize fingerprints in real time. The complex circular harmonic filter, which is used to obtain rotation invariance, is converted into a real-valued filter for real-time implementation. Through computer simulation, we also show that the system performs well on rotated fingerprints.
Representation of crime scenes as virtual reality 3D computer displays promises to become a useful and important tool for law enforcement evaluation and analysis, forensic identification and pathological study, and archival presentation during court proceedings. Use of these methods for assessment of evidentiary materials demands complete accuracy of reproduction of the original scene, both in data collection and in its eventual virtual reality representation. The recording of spatially accurate information as soon as possible after the first arrival of law enforcement personnel is advantageous for unstable or hazardous crime scenes and reduces the possibility that inadvertent measurement error or deliberate falsification may occur, or be alleged, concerning processing of a scene. Detailed measurements and multimedia archiving of critical surface topographical details in a calibrated, uniform, consistent and standardized quantitative 3D coordinate method are needed. These methods would afford professional personnel in initial contact with a crime scene the means for remote, non-contacting, immediate, thorough and unequivocal documentation of the contents of the scene. Measurements of the relative and absolute global positions of objects and victims, and of their dispositions within the scene before their relocation and detailed examination, could be made. Resolution must be sufficient to map both small and large objects, and the equipment must be able to map regions at varied resolution as collected from different perspectives. Progress is presented in devising methods for collecting and archiving 3D spatial numerical data from crime scenes, sufficient for law enforcement needs, by remote laser structured light and video imagery. Two types of simulation studies were done. One study evaluated the potential of 3D topographic mapping and 3D telepresence using a robotic platform for explosive ordnance disassembly. The second study used the laser mapping system on a fixed optical bench with simulated crime scene models of people and furniture to assess the feasibility, requirements and utility of such a system for crime scene documentation and analysis.
A novel cylindrical millimeter-wave imaging technique has been developed at the Pacific Northwest National Laboratory for the detection of metallic and non-metallic concealed weapons. This technique uses a vertical array of millimeter-wave antennas which is mechanically swept around a person in a cylindrical fashion. The wideband millimeter-wave data is mathematically reconstructed into a series of high-resolution images of the person being screened. Clothing is relatively transparent to millimeter-wave illumination, whereas the human body and concealed items are reflective at millimeter wavelengths. Differences in shape and reflectivity are revealed in the images and allow a human operator to detect and identify concealed weapons. A full 360 degree scan is necessary to fully inspect a person for concealed items. The millimeter-wave images can be formed into a video animation sequence in which the person appears to rotate in front of a fixed illumination source; this is a convenient method for presenting the 3D image data for analysis. This work has been fully sponsored by the FAA. An engineering prototype based on the cylindrical imaging technique is presently under development. The FAA is currently opposed to presenting the image data directly to the operator due to personal privacy concerns; a computer-automated system is desired to address this problem by eliminating operator viewing of the imagery.
The use of photographs for the interpretation of patterned injuries of the skin is a skill much like that of photograph interpretation for other uses. As such, the disciplines may well benefit from technology transfer. A case report is presented to illustrate the interpretive process used to estimate the 3D structure of an object used to bludgeon a victim.
Collaborative communications can be a very effective tool in the support of tactical operations, especially when graphics, text and voice are supported. Tactical communication networks often include several low-speed digital radio links, with transmission rates often limited to 2400 bits per second, which makes it difficult to use graphics in interactive communications. RADIOContact is a prototype whiteboard application that operates efficiently over a network that uses the TCP/IP protocols, even when the network contains low-speed links. The application was developed in Java and utilizes distributed object libraries to achieve efficient distribution and management of graphics and text communication objects. The prototype system has been implemented and demonstrated on a heterogeneous network that includes multiple low-speed radio links, demonstrating that effective whiteboard collaboration can be supported in such an environment. This paper presents user requirements and design issues for such applications, and describes the design of the distributed system to meet the tactical user requirements.
This paper describes the design and implementation of a real-time, streaming, Internet video and audio player. The player has a number of advanced features including dynamic adaptation to changes in available bandwidth, latency and latency variation; a multi-dimensional media scaling capability driven by user-specified quality of service (QoS) requirements; and support for complex content comprising multiple synchronized video and audio streams. The player was developed as part of the QUASAR project at Oregon Graduate Institute, is freely available, and serves as a testbed for research in adaptive resource management and QoS control.
W4 is a real-time visual surveillance system for detecting and tracking people and monitoring their activities in an outdoor environment. It operates on monocular grayscale video imagery, or on video imagery from an IR camera. Unlike many systems for tracking people, W4 makes no use of color cues. Instead, W4 employs a combination of shape analysis and tracking to create models of people's appearance so that they can be tracked through interactions such as occlusions. W4 is capable of simultaneously tracking multiple people even with occlusion. It runs at 20 Hz on 320 x 240 resolution images on a dual-Pentium 200 PC.
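The sketch below shows only the generic first stage that grayscale people-trackers of this kind build on: background subtraction followed by connected-component labelling. It is not W4's actual per-pixel background model or shape analysis, and the threshold and minimum blob size are hypothetical.

    import numpy as np
    from scipy import ndimage

    def detect_foreground(frame, background, threshold=25, min_pixels=50):
        """Very simple grayscale change detection and blob extraction.

        Returns bounding boxes (ymin, xmin, ymax, xmax) of changed regions."""
        mask = np.abs(frame.astype(int) - background.astype(int)) > threshold
        labels, n = ndimage.label(mask)                      # connected components
        blobs = []
        for i in range(1, n + 1):
            ys, xs = np.nonzero(labels == i)
            if ys.size >= min_pixels:                        # drop small noise blobs
                blobs.append((ys.min(), xs.min(), ys.max(), xs.max()))
        return blobs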
We integrated a practical digital video database system based on language and image analysis, with components from digital video processing, still image search, information retrieval, and closed captioning processing. The aim is to utilize the multiple modalities of information in video and implement data fusion among them: image information, speech/dialog information, closed captioning information, sound track information (such as music, gunfire and explosions), caption information, motion information and temporal information. Effort is made to allow access to video content at different levels, including the video program level, scene level, shot level, and object level. Browsing, subject-based classification, and random retrieval are all available to gain access to the content.
In this paper an efficient image matching algorithm is presented for use in aircraft navigation. A sequence of images, in which each pair of successive images partially overlaps, is sensed by a monocular optical system. 3D undulation features are recovered from the image pairs and then matched against a reference undulation feature map. Finally, the aircraft position is estimated by minimizing a Hausdorff distance measure. A simulation experiment using real terrain data is reported.
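A minimal sketch of the position-scoring step, assuming feature locations have already been recovered: score candidate position offsets by the Hausdorff distance between the shifted sensed features and the reference map, using scipy.spatial.distance.directed_hausdorff. The feature recovery itself is not shown, and the search grid and point sets are hypothetical.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def best_offset(sensed_points, reference_points, candidate_offsets):
        """Pick the candidate offset minimizing the (symmetric) Hausdorff
        distance, i.e. the max of the two directed distances."""
        best, best_d = None, np.inf
        for off in candidate_offsets:
            shifted = sensed_points + off
            d = max(directed_hausdorff(shifted, reference_points)[0],
                    directed_hausdorff(reference_points, shifted)[0])
            if d < best_d:
                best, best_d = off, d
        return best, best_d

    ref = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 7.0]])
    sensed = ref - np.array([3.0, 2.0])                   # features seen from an offset position
    grid = [np.array([dx, dy]) for dx in range(-5, 6) for dy in range(-5, 6)]
    print(best_offset(sensed, ref, grid))                 # offset (3, 2), distance 0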
We formulate an error function for the supervised learning of image search/detection tasks when the positions of the objects to be found are uncertain or ill-defined. The need for this uncertain object position (UOP) error function arises in at least two ways. First, point-like objects frequently have positions that are inaccurately specified. We illustrate this with the problem of detecting microcalcifications in mammograms. The second type of position uncertainty occurs with extended objects whose boundaries are not accurately defined. In this case we usually only need the detector to respond at one pixel within each object. As an example of this, we present results for neural networks trained to detect clusters of buildings in aerial photographs. We are currently applying the UOP error function to the detection of masses in mammograms, which also have poorly-defined boundaries. In all of these examples, neural networks trained with the UOP error function perform much better than networks trained with the conventional cross-entropy error function.
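The UOP error function itself is not specified in this abstract; purely as a hedged illustration of how position tolerance can be built into a detection loss, the sketch below scores each labelled object by the maximum network output within a small window around its nominal position and treats pixels far from every object as background. This is not necessarily the authors' formulation, and the radius parameter is hypothetical.

    import numpy as np

    def position_tolerant_loss(output_map, object_positions, radius=3, eps=1e-7):
        """Illustrative position-tolerant detection loss (NOT the paper's
        UOP error function).

        Each labelled object contributes -log(max output within 'radius' of
        its nominal position), so the detector only needs to respond at some
        pixel near the uncertain location; background pixels are penalised
        for responding."""
        h, w = output_map.shape
        near_object = np.zeros((h, w), dtype=bool)
        loss = 0.0
        for (r, c) in object_positions:
            r0, r1 = max(0, r - radius), min(h, r + radius + 1)
            c0, c1 = max(0, c - radius), min(w, c + radius + 1)
            near_object[r0:r1, c0:c1] = True
            loss -= np.log(output_map[r0:r1, c0:c1].max() + eps)
        background = output_map[~near_object]
        loss -= np.log(1.0 - background + eps).sum() / max(background.size, 1)
        return loss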
Using a combination of strategies, real-time imaging weapons systems are achieving their goal of detecting intended targets. A combination of techniques such as dedicated image processing hardware, real-time operating systems, mixes of algorithmic methods, and multi-sensor detectors hints at the untapped potential of future weapons systems and their incorporation of truly autonomous target acquisition. Elements such as position information, sensor gain controls, waymarks for mid-course correction, and augmentation with different imaging spectra, as well as future capabilities such as neural-net expert systems and decision processors overseeing a fusion-matrix architecture, may be considered tools for a weapon system's achievement of its ultimate goal. Currently, acquiring a target in a cluttered environment in a timely manner with a high degree of confidence demands that compromises be made regarding a truly automatic system. It is now necessary to include a human in the track decision loop, a system feature that may be long lived. Automatic track recognition will still be the desired goal in future systems, owing to the variability of military missions and the desirability of an expendable asset. Furthermore, with the increasing incorporation of multi-sensor information into the track decision, the human element's real-time contribution must be carefully engineered.
Global positioning system (GPS) receivers and inertial measurement units (IMU) are being integrated with image sensors. This integration provides measurements of the position and attitude of the sensor, which could replace the least squares method traditionally used to solve for position and attitude. Direct measurements of position and attitude make imagery easier to exploit: image mosaics are easier to build, digital terrain elevation data can be generated, and image registration is improved. This paper presents results from using a GPS/IMU-equipped image sensor. Imagery was acquired from a Kodak 460 color IR professional digital camera and from three individually filtered progressive-scan video cameras. GPS and IMU measurements were collected at the time of image acquisition. The image sensors and the GPS and IMU equipment were flown on board a Cessna 172 aircraft. The imagery was automatically exploited to produce mosaics and to manually derive digital terrain elevation data. An optical flow technique was investigated to automate the derivation of digital terrain elevation data.
Shape geometric invariants play an important role in model-based vision (MBV). However, in many MBV scenarios, shape information may not be sufficiently reliable, and hence other types of invariants need to be considered. This paper addresses motion-based classification of objects based on unique motion or activity characteristics in long image sequences. To date, the techniques developed for motion-based recognition are inherently sensitive to (a) the object's shape, (b) Euclidean group actions and (c) time scale, i.e., the velocity and acceleration of the motion. We propose the development of a set of motion-based invariants that capture geometric aspects of an object's kinematic constraints during distinctive motions and activities. Algebraic and differential invariants of curves and surfaces in a projective space, the kinematic image space, are proposed for motion and activity classification. The proposed approach establishes a parallelism between spatial and motion geometric invariance.