In this paper we present a local force model and its integration into a hierarchical analysis for estimating left ventricle motion over a cardiac cycle. The local force model is derived from the dynamics of independent point masses driven by local forces assumed constant over short time intervals. Such a force drives the point masses within a regional patch of the left ventricle surface from one time instant to the next. The trajectory that minimizes the energy required to move a point mass from one surface to the other is taken as the local displacement vector. The estimated trajectory accounts for surface constraints and previous estimates derived from the volumetric image sequence, so that the point masses travel along smooth trajectories resembling realistic left ventricle surface dynamics. Given the surfaces and initial conditions of the left ventricle at consecutive time frames, the model recovers point correspondences for the nonrigid motion between frames. The local force model is incorporated into a hierarchical analysis scheme that provides the complete dynamics of the left ventricle, in contrast to the purely local kinematic analysis of previous approaches. Experimental results on synthetic and real left ventricle CT volumetric images show that the proposed scheme is very promising for cardiac analysis.
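As a rough illustration of the constant-force idea, the sketch below picks, for each surface point, the target on the next surface reachable with the least-energy constant force. The unit mass, the candidate search, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# For a unit point mass at x0 with velocity v0, a constant force f applied
# for time T gives x(T) = x0 + v0*T + 0.5*f*T**2.  Solve for the force that
# reaches each candidate point on the next surface and keep the cheapest.

def constant_force_to(x0, v0, x1, T=1.0):
    """Constant force moving a unit mass from (x0, v0) to x1 in time T."""
    return 2.0 * (x1 - x0 - v0 * T) / T**2

def local_displacement(x0, v0, next_surface_pts, T=1.0):
    """Displacement to the next-surface point reachable with minimal energy."""
    forces = np.array([constant_force_to(x0, v0, p, T) for p in next_surface_pts])
    energies = (forces**2).sum(axis=1) * T      # proxy for the work expended
    best = int(np.argmin(energies))
    return next_surface_pts[best] - x0, forces[best]

# toy usage: one surface point with a small prior velocity estimate
x0 = np.array([10.0, 5.0, 3.0])
v0 = np.array([0.1, 0.0, -0.05])
candidates = x0 + 0.5 * np.random.rand(50, 3)   # mock nearby next-surface samples
disp, force = local_displacement(x0, v0, candidates)
```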
In this paper, we propose a Laplacian matched-filter approach to small object detection that combines gray-scale morphological filtering with wavelet-based multiresolution analysis. The multiresolution matched-filter detection comprises two stages: prewhitening and matched-filter detection fusion. Gray-scale morphological filters serve as the prewhitening filters, and the wavelet transform relates the Laplacian matched filters directly to multiresolution analysis. Preliminary tests on small object detection in simulated narrowband clutter and on microcalcification detection in mammographic images show that the proposed approach is a capable tool for small object detection that requires no explicit assumptions about image background or noise statistics. A general form of whitening filtering and adaptive thresholding, unified as the local operation transformation (LOT), is also presented.
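A minimal sketch of the two-stage pipeline, assuming a gray-scale morphological opening as the prewhitener and a Laplacian-of-Gaussian template as the matched filter; the kernel sizes, the threshold rule, and the omission of the multiresolution fusion are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def prewhiten(img, size=7):
    """Suppress smooth background with a gray-scale morphological opening."""
    background = ndimage.grey_opening(img, size=(size, size))
    return img - background                    # residual keeps small bright objects

def log_kernel(sigma=1.5, half=6):
    """Zero-mean Laplacian-of-Gaussian template matched to a small blob."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    g = np.exp(-r2 / (2.0 * sigma**2))
    log = (r2 - 2.0 * sigma**2) / sigma**4 * g
    return log - log.mean()

def detect(img, sigma=1.5, k=4.0):
    resid = prewhiten(img.astype(float))
    score = ndimage.correlate(resid, -log_kernel(sigma))  # bright blobs score high
    thresh = score.mean() + k * score.std()    # simple adaptive (LOT-like) threshold
    return score > thresh
```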
We describe an active contour based on the elliptical Fourier series and its application to matrix-array ultrasound. Matrix-array ultrasound is a new medical imaging modality that scans a 3D volume electronically, without physically moving the transducer, allowing real-time continuous 3D imaging of the heart. Unlike other 3D ultrasound modalities, which physically move a linear array, matrix-array ultrasound is rapid enough to capture an individual cardiac cycle, yielding a temporal resolution of 22 volumetric scans per second. With the goal of automatically tracking the heart wall, an active contour has been developed that uses the elliptical Fourier series to find perpendicular lines intersecting an initial contour. The neighborhood defined by these perpendiculars is mapped into a rectangular space, called the 1D swath, whose vertical axis represents the inside-vs.-outside dimension of the contour (along the perpendicular) and whose horizontal axis represents parametric distance along the contour (tangent to the contour). A dynamic programming technique is then used to find the minimum-error path traversing the rectangle horizontally, and this path is mapped back into image space to yield a new contour. The method does not iterate, but rather simultaneously searches for the optimum contour within a limited domain. Results are presented applying the technique to 3D ultrasound images of in vivo hearts.
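The dynamic-programming step can be sketched as a minimum-cost left-to-right path search across the swath. The bounded vertical jump, the cost design, and all names are assumptions for illustration.

```python
import numpy as np

def best_path(cost, max_jump=1):
    """Minimum-cost horizontal traversal of the swath.

    cost[r, c]: edge cost at offset r along the normal at contour parameter c.
    Returns one row index per column, i.e. an offset along each perpendicular.
    """
    rows, cols = cost.shape
    acc = np.full((rows, cols), np.inf)
    back = np.zeros((rows, cols), dtype=int)
    acc[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = int(np.argmin(acc[lo:hi, c - 1])) + lo
            acc[r, c] = cost[r, c] + acc[prev, c - 1]
            back[r, c] = prev
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))     # cheapest endpoint, then trace back
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path
```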
As the amount of multidimensional remotely sensed data grows tremendously, Earth scientists need more efficient ways to search and analyze such data. In particular, extracting image content is emerging as one of the most powerful tools for data mining. One of the most promising methods for extracting image content is image classification, which provides a labeling of each pixel in the image. In this paper, we concentrate on neural network classifiers and show how information obtained with a wavelet transform can be integrated into such a classifier. We apply a local spatial frequency analysis, a wavelet transform, to account for statistical texture information in Landsat/TM imagery. Statistical texture is extracted with a continuous edge-texture composite wavelet transform. We show how this approach relates to texture information computed from a co-occurrence matrix. The network is then trained with both the texture information and the additional pixel labels provided by the ground truth data. Theory and preliminary results are described in this paper. In future work, this preprocessing will automatically generate a texture index that will be fed into the artificial neural network, providing better discriminants for Landsat/TM imagery.
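As a simple stand-in for the texture inputs, the sketch below computes single-level Haar subband energies per window (using the PyWavelets package); the paper's continuous edge-texture composite transform is not reproduced, and the window size and feature choice are assumptions.

```python
import numpy as np
import pywt

def texture_features(window):
    """Energies of the four single-level Haar subbands of one window."""
    cA, (cH, cV, cD) = pywt.dwt2(window.astype(float), 'haar')
    return np.array([(band**2).mean() for band in (cA, cH, cV, cD)])

def feature_image(img, win=16):
    """One 4-vector of subband energies per non-overlapping window."""
    feats = []
    for r in range(0, img.shape[0] - win + 1, win):
        for c in range(0, img.shape[1] - win + 1, win):
            feats.append(texture_features(img[r:r + win, c:c + win]))
    return np.array(feats)   # fed, with ground-truth labels, to the network
```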
The need for fast, accurate, and reliable image registration techniques is increasing, primarily due to the large amount of remote sensing data that will be generated by future Earth and space missions and the diversity of such data in temporal, spatial, and spectral components. Registration of remote sensing imagery is one of the most important steps for further processing and interpretation of such data, since information fusion from multiple sensors starts with registration. Traditional approaches to image registration require substantial human involvement in the selection and matching of ground control points in the reference and input data sets. Considering the dramatic increase predicted in the volume of remote sensing data collected during future missions, it is imperative that fully automatic registration algorithms be utilized. We present a three-step approach to automatic registration of remote sensing imagery. The first step involves the wavelet decomposition of the reference and input images to be registered. In the second step, we extract domain-independent features to be used as control points from the low-low components of the wavelet decompositions of the reference and input images, employing the Lerner algebraic edge detector (LAED) and the Sobel edge detector. Finally, we utilize the maxima of the low-low wavelet coefficients preprocessed by the edge detectors and an exclusive-or based similarity metric to compute the transformation function. We illustrate the effectiveness of the proposed registration method on a Landsat thematic mapper image of the Pacific Northwest, and show that the performance of the LAED is superior to that of the Sobel edge detector.
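A rough sketch of the matching core under simplifying assumptions: Sobel edges on a low-low band, binarized by percentile, scored with an exclusive-or similarity over integer translations. The LAED detector, the full wavelet pyramid, and the general transformation fit are omitted.

```python
import numpy as np
from scipy import ndimage

def edge_map(band, pct=90):
    """Binary edge map of a low-low wavelet band via Sobel magnitude."""
    mag = np.hypot(ndimage.sobel(band, 0), ndimage.sobel(band, 1))
    return mag > np.percentile(mag, pct)

def xor_similarity(a, b):
    """Fraction of agreeing pixels; 1.0 means identical edge maps."""
    return 1.0 - np.logical_xor(a, b).mean()

def best_shift(ref_band, in_band, search=8):
    """Exhaustive translation search maximizing the XOR similarity."""
    ref = edge_map(ref_band)
    best = (0, 0, -1.0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = edge_map(np.roll(np.roll(in_band, dy, 0), dx, 1))
            s = xor_similarity(ref, cand)
            if s > best[2]:
                best = (dy, dx, s)
    return best   # (row shift, col shift, similarity)
```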
This paper addresses the importance of a maximum entropy formulation for the extraction of content from a single picture element in a remotely sensed image. Most conventional classifiers assume a winner-take-all procedure in assigning classes to a pixel, whereas in general more than one class is present within the picture element. There have been attempts to perform spectral unmixing using variants of least-squares techniques, but these suffer from conceptual and numerical problems, including the possibility that negative fractions of ground cover classes may be returned by the procedure. In contrast, a maximum entropy (MAXENT) approach to sub-pixel content extraction possesses the useful information-theoretic property of not assuming more information than is given, while automatically guaranteeing positive fractions. In this paper we apply MAXENT to obtain the fractions of ground cover classes present in a pixel and show its clear numerical superiority over conventional methods. The optimality of this method stems from the combinatorial properties of the information-theoretic entropy.
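A small numerical sketch of the MAXENT idea: choose nonnegative, sum-to-one fractions that maximize entropy while a quadratic penalty ties the mixture of known endmember spectra to the observed pixel. The penalty weight, solver, and names are illustrative; the paper's exact formulation may differ.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_fractions(pixel, endmembers, lam=100.0):
    """pixel: (n_bands,); endmembers: (n_classes, n_bands) known spectra."""
    n = endmembers.shape[0]

    def objective(f):
        entropy = -np.sum(f * np.log(f + 1e-12))        # to be maximized
        misfit = np.sum((endmembers.T @ f - pixel)**2)  # linear mixing model
        return lam * misfit - entropy

    cons = ({'type': 'eq', 'fun': lambda f: f.sum() - 1.0},)
    res = minimize(objective, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons, method='SLSQP')
    return res.x   # fractions are nonnegative, unlike unconstrained least squares
```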
This paper reviews the activities at OKSI related to imaging spectroscopy, presenting current and future applications of the technology. We discuss the development of several systems, including hardware, signal processing, data classification algorithms, and benchmarking techniques to determine algorithm performance. Signal processing for each application is tailored by incorporating the phenomenology appropriate to the process into the algorithms. Pixel signatures are classified using techniques such as principal component analysis, generalized eigenvalue analysis, and novel very fast neural network methods. The major hyperspectral imaging systems developed at OKSI include the intelligent missile seeker (IMS) demonstration project for real-time target/decoy discrimination, and the thermal infrared imaging spectrometer (TIRIS) for detection and tracking of toxic plumes and gases. In addition, systems for applications in medical photodiagnosis, manufacturing technology, and crop monitoring are under development.
The motivation behind this research has been to identify, and where possible minimize or eliminate, potential problem areas facing NASA in its mission of gathering and analyzing remotely sensed imagery in both Earth and space disciplines. Managing and extracting useful information from the massive image databases resulting from such missions is a challenging task for NASA. The key to real-time archival and retrieval of this massive image data lies in the notion of content-based image data management. Two major steps are involved in this process. The first is to automatically extract image content, or meta-data, from satellite imagery. The second is to organize this database to permit users from numerous disciplines and communities to access data relevant to their needs. Accordingly, each data set is indexed in multiple ways, enabling users to retrieve images by specifying constraints over a combination of attributes. One such method provides users the ability to search the data holdings using a metric of similarity in content, so that neighboring images in the database have a high probability of a hit when queried for a specific type of meta-data content. An important area of research is therefore to compute and evaluate similarity measures for images. In this paper, we present a back-propagation neural network based technique to classify a multispectral satellite image and extract a similarity measure using the meta-data classification content.
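One plausible reading of such a similarity measure, sketched under assumptions: summarize each classified image by its class-fraction histogram and compare histograms by intersection. The metric and names are illustrative, not the paper's definition.

```python
import numpy as np

def class_histogram(label_img, n_classes):
    """Fraction of pixels assigned to each class by the network."""
    counts = np.bincount(label_img.ravel(), minlength=n_classes)
    return counts / counts.sum()

def content_similarity(hist_a, hist_b):
    """Histogram intersection: 1.0 for identical classification content."""
    return float(np.minimum(hist_a, hist_b).sum())
```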
Ken will talk about his experiences as an end user of technology. Even moderate technological progress in the fields of pattern recognition and artificial intelligence can be, often surprisingly, of great help to the blind. One example is providing portable bar code scanners so that a blind person knows what he is buying and what color it is. In this age of microprocessors controlling everything, how can a blind person find out what his VCR is doing? Is there some technique that will allow a blind musician to convert print music into MIDI files to drive a synthesizer? Can computer vision help the blind cross a road, including predicting where oncoming traffic will be? Can computer vision provide spoken descriptions of scenes so a blind person can figure out where doors and entrances are located, and what the signage on the building says? He asks, 'can computer vision help me flip a pancake?' His challenge to those in the computer vision field is 'where can we go from here?'
From the viewpoint of the economic growth theorist, the broad social impact of improving computer vision should be to improve people's material well-being. Developing computer vision entails building knowledge of perception and interpretation into new devices which enhance the scope and depth of human capability. Some worry that saving lives and replacing tedious jobs through computer vision will burden society with increasing population and unemployment; such worries are unjustified because humans are 'the ultimate resource.' Because development of computer vision has costs as well as benefits, developers who wish to have a positive social impact should pursue projects that promise to pay off in the open market, and should seek private instead of government funding as much as possible.
A hard real-time vision system has been developed that recognizes and tracks multiple cars in video sequences taken from a car driving on highways and country roads. Recognition is accomplished by combining the analysis of single image frames with the analysis of motion information provided by multiple consecutive frames. In single frames, cars are recognized by matching deformable gray-scale templates, by detecting image features such as corners, and by evaluating how these features relate to each other. Cars are also recognized by tracking motion parameters that are typical of cars. The vision system utilizes the hard real-time operating system Maruti, which guarantees that the timing constraints on the various vision processes are satisfied. The dynamic creation and termination of tracking processes optimizes the amount of computational resources spent and allows fast detection and tracking of multiple cars. Experimental results demonstrate robust, real-time recognition and tracking over thousands of image frames.
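A minimal sketch of the single-frame matching component, assuming plain normalized cross-correlation against a rigid gray-scale template; the deformable-template fitting, corner detection, and feature relations of the actual system are omitted.

```python
import numpy as np

def ncc(window, template):
    """Normalized cross-correlation between one window and the template."""
    w = window - window.mean()
    t = template - template.mean()
    denom = np.sqrt((w**2).sum() * (t**2).sum()) + 1e-12
    return float((w * t).sum() / denom)

def match_template(img, template, stride=4, thresh=0.7):
    """Return (row, col, score) for windows that resemble the car template."""
    th, tw = template.shape
    hits = []
    for r in range(0, img.shape[0] - th, stride):
        for c in range(0, img.shape[1] - tw, stride):
            s = ncc(img[r:r + th, c:c + tw], template)
            if s > thresh:
                hits.append((r, c, s))
    return hits
```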
We describe the construction and performance of an acousto-optic tunable filter (AOTF) camera system for obtaining images in the range of 450 to 1000 nm. A combination of a 10-bit digital video camera and a high-speed frame grabber board allows continuous display of high-resolution, filtered images on a computer monitor at 30 frames per second. The ability to recognize targets is significantly enhanced by processing the filtered images. A basic operation that requires two frame grabs at two different filter settings plus image processing currently runs at 6 frames per second. The pre-processing of the target image by the AOTF simplifies subsequent image processing and is nearly real time.
Autonomous driving provides an effective way to address traffic concerns such as safety and congestion. There has been increasing interest in the development of autonomous driving in recent years. Interest has included high-speed driving on highways, urban driving, and navigation through less structured off-road environments. The primary challenge in autonomous driving is developing perception techniques that are reliable under the extreme variability of outdoor conditions in any of these environments. Roads vary in appearance. Some are smooth and well marked, while others have cracks and potholes or are unmarked. Shadows, glare, varying illumination, dirt or foreign matter, other vehicles, rain, and snow also affect road appearance. This paper describes a visual processing algorithm that supports autonomous driving. The algorithm requires that lane markings be present and attempts to track the lane markings on each of two lane boundaries in the lane of travel. There are three stages of visual processing computation: extracting edges, determining which edges correspond to lane markers, and updating geometric models of the lane markers. A fourth stage computes a steering command for the vehicle based on the updated road model. All processing is confined to the 2-D image plane. No information about the motion of the vehicle is used. This algorithm has been used as part of a complete system to drive an autonomous vehicle, a high mobility multipurpose wheeled vehicle (HMMWV). Autonomous driving has been demonstrated on both local roads and highways at speeds up to 100 kilometers per hour (km/h). The algorithm has performed well in the presence of non-ideal road conditions including gaps in the lane markers, sharp curves, shadows, cracks in the pavement, wet roads, rain, dusk, and nighttime driving. The algorithm runs at a sampling rate of 15 Hz and has a worst case processing delay time of 150 milliseconds. Processing is implemented under the NASA/NBS Standard Reference Model for Telerobotic Control System Architecture (NASREM) architecture and runs on a dedicated image processing engine and a VME-based microprocessor system.
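A compact sketch of the model-update and steering stages under assumptions: each lane boundary is fit with a quadratic image-plane model, and a proportional steering command is derived from the lateral error at a look-ahead row. The model order, gain, and names are illustrative; the paper's geometric models and NASREM implementation are not reproduced.

```python
import numpy as np

def fit_lane(points):
    """Quadratic image-plane lane model x(y) from (row, col) marker points."""
    y, x = points[:, 0], points[:, 1]
    return np.polyfit(y, x, 2)            # x = a*y**2 + b*y + c

def steering_command(left_pts, right_pts, lookahead_row, img_center_col,
                     gain=0.005):
    """Proportional steering toward the lane center at the look-ahead row."""
    left = np.polyval(fit_lane(left_pts), lookahead_row)
    right = np.polyval(fit_lane(right_pts), lookahead_row)
    lane_center = 0.5 * (left + right)
    error = lane_center - img_center_col  # pixels; >0 means lane is to the right
    return gain * error
```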
This paper presents a system for performing real-time vehicular self-location through a combination of triangulation of target sightings and low-cost auxiliary sensor information (e.g. accelerometer, compass, etc.). The system primarily relies on the use of three video cameras to monitor a dynamic 180 degree field of view. Machine vision algorithms process the imagery from this field of view searching for targets placed at known locations. Triangulation results are then combined with the past video processing results and auxiliary sensor information to arrive at real-time vehicle location update rates in excess of 10 Hz on a single low-cost conventional CPU. To supply both extended operating range and nighttime operational capabilities, the system also possesses an active illumination mode that utilizes multiple, inexpensive infrared LEDs to act as the illuminating source for reflective targets. This paper presents the design methodology used to arrive at the system, explains the overall system concept and process flow, and will briefly discuss actual results of implementing the system on a standard commercial vehicle.
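A small sketch of the triangulation core under assumptions: each sighting of a target at a known position yields a bearing ray from the vehicle, and the position estimate is the least-squares intersection of the sighting lines. The fusion with auxiliary sensors and past results is omitted.

```python
import numpy as np

def locate(targets, bearings):
    """targets: (N, 2) known positions; bearings: (N, 2) unit vectors
    from the vehicle toward each target.  Solves for the point minimizing
    the summed squared distance to all sighting lines."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, u in zip(targets, bearings):
        P = np.eye(2) - np.outer(u, u)   # projector onto the line's normal space
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# toy usage: three targets sighted from an (unknown) vehicle position
targets = np.array([[0.0, 10.0], [10.0, 10.0], [5.0, 0.0]])
truth = np.array([4.0, 5.0])
bearings = np.array([(t - truth) / np.linalg.norm(t - truth) for t in targets])
print(locate(targets, bearings))         # recovers ~[4.0, 5.0]
```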
This presentation highlights needs for autonomous video surveillance in the context of physical security for office buildings and surrounding areas. Physical security is described from an operational perspective, defining the principal responsibilities and concerns of a physical security system. Capabilities and limitations of current video surveillance technology are described, followed by examples of how computer vision techniques are being used and advanced for autonomous video surveillance systems.
Research and development for image understanding systems (RADIUS), a two-phase five-year project, was aimed at increasing imagery analyst (IA) productivity and improving the quality and timeliness of IA products. A key feature of RADIUS was model-supported exploitation (MSE), in which two-dimensional and three-dimensional models of a site are used as the basis of much of the subsequent analysis. Image understanding (IU) techniques figure strongly in this analysis. This paper describes the challenges faced in the three-year phase II RADIUS testbed system (RTS) development. The development, testing, evaluation, and incremental enhancement of the RTS are described, as the project attempted to define and meet the real-world needs of imagery interpretation and analysis.
Over the last 30 years, there have been steady and reasonably extensive R&D activities applied to the problem of interpreting image data automatically. The goal of much of this research has been directed either at automatic target recognition (ATR) or at the automation of the functions of intelligence image analysis. These efforts have greatly advanced our understanding of the nature of image content and what is necessary to describe it using computer algorithms. On the other hand, the goal of automatic interpretation of complex scenes and the recognition of targets in the presence of clutter and occlusion is still not formalized at the level of an engineering discipline. This paper reviews what the key issues are in achieving the automation of image understanding and provides examples of both success and still-unattainable interpretation capabilities.
To enable the transition of exploitation image understanding (IU) technology into near-operational use in the intelligence community, user interfaces must be designed to allow simple, intuitive access to IU functionality and results. The complexity of IU systems, both in required inputs and processing, must be hidden from the user as much as possible to avoid heavy training costs. This paper describes some of the important user interface issues encountered when image understanding algorithms are introduced to an imagery analyst, and discusses some of the solutions that have evolved during the development of the RADIUS testbed. Significant issues we have encountered are algorithm and parameter selection, algorithm execution, visual representation of change, and display of historical results.
This paper details recent work by our group on the use of low-level features for the identification of man-made regions in unmanned aerial vehicle (UAV) imagery. Using low-level fractal-based features, the system classifies regions in the image via probability densities estimated for each class. These densities are estimated semi-parametrically, giving the system great flexibility in the functional form of the densities. This paper details some of our group's contributions to the areas of feature extraction, probability density estimation, classification, and the integration of these techniques into a user-friendly environment. In addition, we present some preliminary results from an ongoing large-scale study involving recently collected UAV imagery.
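A compact sketch of the classification stage, assuming a Gaussian kernel density estimate per class as the semi-parametric density model; the fractal feature extraction and the paper's actual estimator are not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

class DensityClassifier:
    """Label regions by the class whose estimated density is highest."""

    def fit(self, feats_by_class):
        """feats_by_class: dict label -> (n_samples, n_features) array."""
        self.kdes = {lab: gaussian_kde(f.T) for lab, f in feats_by_class.items()}
        return self

    def predict(self, feats):
        """feats: (n_samples, n_features) -> one label per sample."""
        labels = list(self.kdes)
        scores = np.vstack([self.kdes[lab](feats.T) for lab in labels])
        return [labels[i] for i in np.argmax(scores, axis=0)]
```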
A model-supported exploitation (MSE) system presents unique challenges to database implementation. An MSE database must be designed to store complex information such as geospatial coordinates, imagery, 3D geometric models, camera models, annotations, and support for image understanding (IU) algorithms. This paper presents concepts from the MSE database implemented within the research and development for image understanding systems (RADIUS) project. Previous papers detail the storage objectives of RADIUS as well as general discussions of MSE database requirements. This paper explores performance enhancements to the RADIUS testbed database (RTDB) within the scope of these objectives.
A system was developed that integrates intelligent document analysis with multiple character/numeral recognition engines in order to achieve high-accuracy automated financial document processing. In this system, images are accepted in both grayscale and binary formats. A document analysis module starts by extracting essential features from the document to help identify its type (e.g. personal check, business check, etc.). These features are also utilized to conduct a full analysis of the image to determine the location of interesting zones such as the courtesy amount and the legal amount. These fields are then made available to several recognition knowledge sources, such as courtesy amount recognition engines and legal amount recognition engines, through a blackboard architecture. This architecture allows all the available knowledge sources to contribute incrementally and opportunistically to the solution of the given recognition query. Performance results on a test set of machine-printed business checks using the integrated system are also reported.
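A toy sketch of the blackboard pattern described above, under assumptions: knowledge sources post value/confidence hypotheses for a field and the board resolves them by summed confidence. The merging rule, the engine names, and the API are illustrative.

```python
from collections import defaultdict

class Blackboard:
    """Shared store to which recognition knowledge sources post hypotheses."""

    def __init__(self):
        self.hypotheses = defaultdict(list)   # field -> [(value, conf, source)]

    def post(self, field, value, conf, source):
        self.hypotheses[field].append((value, conf, source))

    def resolve(self, field):
        """Combine evidence: the value with the highest total confidence wins."""
        votes = defaultdict(float)
        for value, conf, _ in self.hypotheses[field]:
            votes[value] += conf
        return max(votes, key=votes.get) if votes else None

# usage: three courtesy-amount engines contribute opportunistically
bb = Blackboard()
bb.post('courtesy_amount', '125.00', 0.80, 'car_engine_1')
bb.post('courtesy_amount', '125.00', 0.65, 'car_engine_2')
bb.post('courtesy_amount', '126.00', 0.70, 'car_engine_3')
print(bb.resolve('courtesy_amount'))      # -> '125.00'
```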
In this paper, the wavelet transform is applied to surface mount device (SMD) images to devise a system for inspecting the presence of SMDs on printed circuit boards. The complete system involves preprocessing, feature extraction, and classification. The images correspond to three cases: SMD present (SMD), SMD not present with a speck of glue (GLUE), and SMD not present (noSMD). For each case, two images are collected using top and side illumination, but these are first combined into one image before further processing. Preprocessing is done by applying the wavelet transform to the images to expose details. Using 500 images for each of the three cases, various features are considered from different wavelet subbands, using one or several transform levels, to find four good discriminating parameters. Classification is performed sequentially using a two-level binary decision tree. Two features are combined into a two-component feature vector and fed into the first level, which separates the SMD and noSMD cases. The second level uses another feature vector, produced by combining two other features, to separate the SMD and GLUE cases. The features used give no cluster overlap on the training set, and a simple parallelepiped classifier devised at each level of the tree produces no errors on this set. Results give 99.6% correct classification when applied to a separate testing set consisting of 500 images for each case. All the errors occur at level 2, where six SMD images are erroneously classified as GLUE.
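A minimal sketch of one tree level, reading the parallelepiped classifier as an axis-aligned box fit to one class's training cluster; the wavelet features themselves and the actual bounds are not reproduced.

```python
import numpy as np

class ParallelepipedClassifier:
    """Axis-aligned box test on a two-component feature vector."""

    def fit(self, feats, margin=0.0):
        """feats: (n_samples, 2) training vectors of the target class."""
        self.lo = feats.min(axis=0) - margin
        self.hi = feats.max(axis=0) + margin
        return self

    def predict(self, feats):
        """True where a feature vector falls inside the learned box."""
        return np.all((feats >= self.lo) & (feats <= self.hi), axis=1)
```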
The identification and measurement of buildings in imagery is important to a number of applications, including cartography, modeling and simulation, and weapon targeting. Extracting large numbers of buildings manually can be time-consuming and expensive, so automation of the process is highly desirable. This paper describes and demonstrates such an automated process for extracting rectilinear buildings from stereo imagery. The first step is the generation of a dense elevation matrix registered to the imagery. In the examples shown, this was accomplished using global minimum residual matching (GMRM), which automatically removes y-parallax from the stereo imagery and produces a dense matrix of x-parallax values that are proportional to the local elevation and, of course, registered to the imagery. The second step is to form a joint probability distribution of the image gray levels and the corresponding height values from the elevation matrix. Based on the peaks of that distribution, the area of interest is segmented into feature and non-feature areas. The feature areas are further refined using length, width, and height constraints to yield promising building hypotheses with their corresponding vertices. The gray-shade image is used in the third step to verify the hypotheses and to determine precise edge locations corresponding to the approximate vertices and satisfying appropriate orthogonality constraints. Examples of successful application of this process to imagery are presented, and extensions involving dense elevation matrices from other sources are possible.
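A rough sketch of the second step under assumptions: form the joint histogram of gray level and elevation, take its strongest above-ground peak, and mark pixels near that peak as feature (building) areas. The peak picking and the length/width/height refinement are deliberately simplified.

```python
import numpy as np

def segment_buildings(gray, height, bins=64, ground_bins=8, halo=2):
    """Feature mask from the strongest elevated peak of the joint histogram."""
    hist, g_edges, h_edges = np.histogram2d(gray.ravel(), height.ravel(),
                                            bins=bins)
    hist[:, :ground_bins] = 0                 # ignore near-ground elevations
    gi, hi = np.unravel_index(np.argmax(hist), hist.shape)
    g_idx = np.clip(np.digitize(gray, g_edges) - 1, 0, bins - 1)
    h_idx = np.clip(np.digitize(height, h_edges) - 1, 0, bins - 1)
    return (np.abs(g_idx - gi) <= halo) & (np.abs(h_idx - hi) <= halo)
```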
By combining new media processor technology with parallel processing, Parsytec has created a whole range of new, programmable high-performance computer systems for image processing. The flexibility and scalability of this approach offers solutions for many situations in various industrial applications.
An approach for estimating 3D head orientation in a monocular image sequence is proposed. The approach employs recently developed image-based parameterized tracking of the face and face features to locate the area in which a sub-pixel parameterized shape estimation of the eye's boundary is performed. This involves tracking five points: four at the eye corners and one at the tip of the nose. We describe an approach that relies on the coarse structure of the face to compute orientation relative to the camera plane. Our approach employs the projective invariance of the cross-ratios of the eye corners and anthropometric statistics to estimate head yaw, roll, and pitch. Analytical and experimental results are reported.
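A tiny sketch of the invariant in question: the cross-ratio of four collinear points (here the four eye corners) is preserved under projection, so its image-plane value constrains the pose. The recovery of yaw, roll, and pitch from anthropometric statistics is not reproduced.

```python
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross-ratio (|p1p3|*|p2p4|)/(|p1p4|*|p2p3|) of four collinear points."""
    d = lambda a, b: np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
    return (d(p1, p3) * d(p2, p4)) / (d(p1, p4) * d(p2, p3))

# usage: four eye corners along a (roughly) common line
print(cross_ratio((0, 0), (1, 0), (3, 0), (4, 0)))   # 1.125 for this spacing
```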
The mission of the Department of Defense Counter-drug Technology Development Program Office's face recognition technology (FERET) program is to develop automatic face-recognition systems for intelligence and law enforcement applications. To achieve this objective, the program supports research in face-recognition algorithms, the collection of a large database of facial images, independent testing and evaluation of face-recognition algorithms, construction of real-time demonstration systems, and the integration of algorithms into the demonstration systems. The FERET program has established baseline performance for face recognition. The Army Research Laboratory (ARL) has been the program's technical agent since 1993, managing development of the recognition algorithms and database collection, and conducting algorithm testing and evaluation. Currently, ARL is managing the development of several prototype face-recognition systems that will demonstrate complete real-time video face identification in an access control scenario. This paper gives an overview of the FERET program, presents performance results of the face-recognition algorithms evaluated, and addresses the future direction of the program and applications for DoD and law enforcement.
We describe a novel approach to fully automated face recognition and show its feasibility on a large database of facial images (FERET). Our approach, based on a hybrid architecture consisting of an ensemble of connectionist networks -- radial basis functions (RBF) -- and inductive decision trees (DT), combines the merits of 'discrete and abstractive' features with those of 'holistic' template matching. Training for face detection takes place over both positive and negative examples. The benefits of our architecture include (1) robust detection of facial landmarks using decision trees, and (2) robust face recognition using consensus methods over ensembles of RBF networks. Experiments carried out using k-fold cross-validation on a large database of 748 images corresponding to 374 subjects, among them 11 duplicates, yield on average 87% correct match, and ROC analysis shows that 99% correct verification is achieved at a 2% reject rate.
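One simple reading of the consensus step, sketched under assumptions: per-class scores from each RBF network in the ensemble are averaged, and low-margin decisions are rejected for verification. The combination rule and rejection threshold are illustrative, not the paper's method.

```python
import numpy as np

def consensus(score_matrix, reject_margin=0.05):
    """score_matrix: (n_networks, n_classes) scores for one probe face.
    Returns the winning class index, or None to reject the match."""
    mean = score_matrix.mean(axis=0)          # average the ensemble's scores
    order = np.argsort(mean)[::-1]
    best, runner_up = order[0], order[1]
    if mean[best] - mean[runner_up] < reject_margin:
        return None                           # ambiguous: reject
    return int(best)
```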
Human identification is a two-step process of initial identity assignment and later verification or recognition. The positive identification requirement is a major part of the classic security, legal, banking, and police task of granting or denying access to a facility, granting authority to take an action, or, in police work, identifying or verifying the identity of an individual. To meet this requirement, a three-part research and development (R&D) effort was undertaken by Betac International Corporation, through its subsidiaries Betac Corporation and Technology Recognition Systems, to develop an automated access control system using infrared (IR) facial images to verify the identity of an individual in real time. The system integrates IR facial imaging and a computer-based matching algorithm to perform the human recognition task rapidly, accurately, and nonintrusively, based on three basic principles: every human IR facial image (or thermogram) is unique to that individual; an IR camera can be used to capture human thermograms; and captured thermograms can be digitized, stored, and matched using a computer and mathematical algorithms. The first part of the development effort, an operator-assisted IR image matching proof-of-concept demonstration, was successfully completed in the spring of 1994. The second part of the R&D program, the design and evaluation of a prototype automated access control unit using the IR image matching technology, was completed in April 1995. This paper describes the final development effort to identify, assess, and evaluate the availability and suitability of robust image matching algorithms capable of supporting and enhancing the use of IR facial recognition technology. The most promising mature and available image matching algorithm was integrated into a demonstration access control unit (ACU) using a state-of-the-art IR imager, and its performance was evaluated against that of a prototype automated ACU using a less robust algorithm and a dated IR imager. Further development of this demonstration ACU will lead to the production of a commercial IR facial imaging identity verification system capable of meeting a broad range of access control and other security and law enforcement applications.
The use of conventional video cameras at standard broadcast rates permits 30-frame-per-second video capture and recording. Even when moving events are recorded with fast shuttering to preclude blurring (e.g. 1/1000th of a second), consecutive recorded stopped-action stills are separated by a 33 millisecond interval. The reason is the finite time necessary to serially dump recorded information from the microchip sensor and reinitialize the sensor for the next capture event. By designing a parallel video chip with multiple segments, each independently capable of sensor input/output/re-initialization, the interval required for unloading and resetting the entire sensor is decreased by the number of discrete segments in the chip and the number of unloading ports used to transfer data. Silicon Mountain Design, Inc. (SMD) developed a 16-parallel-channel-output, 512 by 512 by 8 bit digital video camera, with a memory buffer that absorbs 256 full images. This camera has the uniquely advantageous feature that no image data is lost while the camera discharges its image to the parallel output ports. With this parallel video camera it is possible to record events at 1/1000th of a second (or faster) continuously, at least until the memory buffer fills. Structured-light stereo numerical camera technology requires the collection of a series of video images, each containing a different 'exposure' of an object under a different pattern of structured laser beams. The complete series creates a temporal-spatial encoding of the laser beams from which a 3-D numerical recreation of the object is calculated. With a parallel video camera, the collection of a complete series is limited only by the time it takes to expose each video image plus the time it takes to change the projected light pattern. Using a rapid ferroelectric liquid crystal electro-optic modulator with a 1 millisecond cycle time and an SMD parallel video camera cycling at 1 millisecond, each pattern is projected and recorded in a cycle time of 1/500th of a second, and an entire set of patterns can be recorded within 1/60th of a second. This pattern set contains all the information necessary to calculate a 3-D map. The use of hyper-speed parallel video cameras in conjunction with high-speed modulators thus enables acquisition, at video data rates, of all the data necessary to calculate numerical digital 3-D metrological surface data: a 3-D video camera can operate at the rate of a conventional 2-D video camera. The speed of actual 3-D output is a function of the speed of the computer, a parallel processor being preferred for the task. With video-rate 3-D data acquisition, law enforcement could survey crime scenes, obtain evidence, watch and record people, packages, and suitcases, and record disaster scenes very rapidly.
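A quick check of the quoted timing budget, with the numbers taken from the text above and the per-step costs treated as simply additive (an assumption):

```python
# 1 ms modulator cycle + 1 ms parallel-camera cycle per structured-light pattern
modulator_s = 1e-3
camera_s = 1e-3
per_pattern_s = modulator_s + camera_s
print(per_pattern_s)                    # 0.002 s, i.e. 1/500th of a second
print((1.0 / 60.0) / per_pattern_s)     # ~8.3 patterns fit within 1/60th second
```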
The Naval Air Warfare Center Training Systems Division is currently involved in the commercialization of the Weapons Team Engagement Trainer (WTET) through a nontraditional technology transfer program. Funded by the Office of the Secretary of Defense, the Defense Laboratory Partnership for Technology Transfer represents an opportunity for the Department of Defense to transition research and development products to the commercial sector through partnership with industry. This effort will produce a commercially available special weapons and tactics (SWAT) team trainer that will satisfy both military and law enforcement requirements. The effort has benefits in terms of speed of transition and opportunities for user involvement that will result in a higher-quality product than traditional acquisition methods. Details of the system and the transfer process are provided.