The automatic, non-contact measurement of object surfaces is one of the most important applications of digital close-range photogrammetry, and many methods have been developed in recent years. Because surfaces often do not show sufficient texture, many of these methods are based on structured light. A new approach of this type is presented, based on passive triangulation, which combines the phase-shift principle with the coded-light method. The mathematical model is presented, which allows the simultaneous determination of the object surface and the calibration parameters.
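The combination of phase shifting with coded light can be sketched as follows (a minimal illustration under standard assumptions, not the paper's full calibration model): four patterns shifted by 90° yield the wrapped phase, and a coded-light (e.g. Gray-code) sequence supplies the integer fringe period that resolves the 2π ambiguity.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    # wrapped phase in (-pi, pi] from four images shifted by 90 deg:
    # I_k = A + B*cos(phi + k*pi/2), k = 0..3
    return np.arctan2(i4 - i2, i1 - i3)

def unwrap_with_code(wrapped, period_index):
    # the coded-light pattern identifies the integer fringe period,
    # resolving the 2*pi ambiguity of the wrapped phase
    return wrapped + 2.0 * np.pi * period_index

# simulate one pixel with phase 0.7 rad, offset A = 10, modulation B = 5:
phi = 0.7
k = np.arange(4)
imgs = 10.0 + 5.0 * np.cos(phi + k * np.pi / 2)
print(round(float(four_step_phase(imgs[0], imgs[1], imgs[2], imgs[3])), 3))  # → 0.7
```

The absolute phase then maps to a projector column, which, together with the camera ray, gives a triangulated 3D point.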
The prime objective of NASA's Arctic Ice Mapping project is to provide accurate ice-sheet elevation data for change detection. The airborne laser altimetry system ATM, developed by NASA, has been used successfully in three missions over Greenland. This paper provides background information about the ATM system and describes the tests that have been carried out to derive digital elevation models and to extract ice features from the raw data. After transforming the raw data into local coordinate systems, a simple thinning scheme is applied to reduce redundancy. The digital elevation models are derived from either the original or the thinned data sets by planar interpolation. Six parallel strips in different areas were merged, and the resulting elevation model was used to map ice-sheet features such as undulations and lakes.
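The thinning and planar interpolation steps can be sketched as follows (a minimal version under simple assumptions; the ATM processing chain itself is not published in this abstract): keep every k-th footprint, fit a least-squares plane z = ax + by + c to the points in a grid cell, and evaluate it at the cell centre.

```python
import numpy as np

def thin(points, k):
    # simple thinning: keep every k-th laser footprint to cut redundancy
    return points[::k]

def plane_fit(points):
    # least-squares plane z = a*x + b*y + c through the retained points
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeff, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeff  # (a, b, c)

def interpolate(coeff, x, y):
    a, b, c = coeff
    return a * x + b * y + c

# four footprints lying exactly on the plane z = 1 + x + 2y:
pts = np.array([[0, 0, 1.0], [1, 0, 2.0], [0, 1, 3.0], [1, 1, 4.0]])
c = plane_fit(pts)
print(round(float(interpolate(c, 0.5, 0.5)), 3))  # → 2.5
```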
The Digital Terrain Model (DTM) is one of the most elementary and important products of processing stereo image data. The 3D shape of the imaged scene is the basis for topographic mapping, orthophoto generation, geocoding of multispectral image data, and more. An automatic procedure for generating DTMs from three-line imagery has been under development for several years. The main features of the algorithm are feature-based matching of points and edges extracted in all three channels, consistency checks in image and object space using the known orientation of the image strips, finite element modelling for surface representation, and a coarse-to-fine processing strategy which controls the overall processing steps. Optionally, intensity-based least-squares matching is added when the most precise DTM is required. In this paper the procedure is described in detail. Processing the Andes scene of orbit 115 and the Australia scene of orbit 75B of the MOMS-02/D2 mission shows that the procedure succeeds in mountainous terrain as well as in low-texture scenes. Matching of the three panchromatic stereo channels is fast and reliable. The height accuracy of 3D points, found experimentally by error propagation, is about 10 - 15 m. For the Australia scene this accuracy level is confirmed by independent check measurements using DGPS.
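The coarse-to-fine strategy mentioned above can be sketched with a simple image pyramid (an illustrative implementation, not the authors' code): matching starts on the coarsest level, and each result steers the search window on the next finer level.

```python
import numpy as np

def pyramid(img, levels):
    # build an image pyramid by 2x2 averaging; coarse-to-fine matching
    # runs at the coarsest level first and propagates the result down
    pyr = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        im = pyr[-1]
        h, w = (im.shape[0] // 2) * 2, (im.shape[1] // 2) * 2
        im = im[:h, :w]
        pyr.append(0.25 * (im[0::2, 0::2] + im[1::2, 0::2]
                           + im[0::2, 1::2] + im[1::2, 1::2]))
    return pyr[::-1]  # coarsest level first

levels = pyramid(np.arange(16.0).reshape(4, 4), 2)
print(levels[0].shape)  # → (2, 2)
```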
We introduce non-standard methods of deriving algebraic invariants and demonstrate two types of applications of these invariants. In model transfer, a collection of conjugate points is determined on a set of reference images and 'transferred' to the matching conjugate points on a new view of the 3D object, without prior computation of camera geometry or scene reconstruction. In object reconstruction, general 3D object points are represented as functions of non-coplanar fiducial points and corresponding conjugate points across multiple images; in this application the object points are 'reconstructed' once quantitative values are specified for the fiducial points. The methods we introduce for deriving these invariant algorithms extend from the linear fractional central projection camera model to weak perspective and certain non-central projection camera models. Stability against adverse geometries and measurement error can be enhanced by using redundant fiducial points and images to determine the transfer and reconstruction functions. Extensibility and stability are indications of the robustness of these methods.
The paper introduces the extraction of objects that are higher than their surroundings, such as houses, trees, or bridges, by combining the results of a segmentation of a DTM with a texture analysis of the gray-value image. We propose a new algorithm (called dual rank) to extract the ground in the DTM, which can be seen as an extension of the gray-scale opening. The extraction of trees in the gray-value image is done by a texture transformation, a fast and stable method for distinguishing houses and roads from trees. The two results are combined in a region-oriented manner, i.e., each elevated object is analyzed to determine whether it is a house, a tree, or a combination of both, and is separated and classified accordingly.
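The gray-scale opening that the dual rank algorithm extends can be sketched on a 1D elevation profile (the dual rank operator itself is not specified in this abstract): an erosion (minimum filter) followed by a dilation (maximum filter) removes peaks narrower than the structuring element, i.e. candidate above-ground objects.

```python
import numpy as np

def rank_filter(z, size, rank):
    # sliding-window rank filter over a 1D profile (edge-padded)
    pad = size // 2
    zp = np.pad(z, pad, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(zp, size)
    return np.sort(win, axis=1)[:, rank]

def gray_opening(z, size):
    # erosion (rank 0 = min) followed by dilation (rank size-1 = max);
    # peaks narrower than `size` are flattened to the ground level
    return rank_filter(rank_filter(z, size, 0), size, size - 1)

dtm = np.array([0., 0., 0., 5., 5., 0., 0., 0.])  # a narrow "house"
ground = gray_opening(dtm, 5)
print(ground)  # the 2-cell-wide peak is removed; the ground stays flat
```

Subtracting the estimated ground from the DTM then yields a normalized height image from which elevated objects are segmented.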
This paper presents an intelligent multisensor concept for measuring 3D objects using an image-guided laser radar scanner. The fields of application are all kinds of industrial inspection and surveillance tasks where it is necessary to detect, measure, and recognize 3D objects at distances of up to 10 m with high flexibility. Such applications include the surveillance of security areas or container storages as well as navigation and collision avoidance for autonomous guided vehicles. The multisensor system consists of a standard CCD matrix camera and a 1D laser radar rangefinder mounted on a 2D mirror scanner. With this sensor combination it is possible to acquire gray-scale intensity data as well as absolute 3D information. To improve system performance and flexibility, the intensity data of the scene captured by the camera can be used to focus the measurement of the 3D sensor on relevant areas. Camera guidance of the laser scanner is useful because the acquisition of spatial information is slow compared with the image sensor's ability to snap an image frame in 40 ms. Relevant areas in a scene are located by detecting object edges using various image processing algorithms. The complete sensor system is controlled by three microprocessors carrying out the 3D data acquisition, the image processing tasks, and the multisensor integration. The paper details the multisensor concept, describes the process of sensor guidance and 3D measurement, and presents practical results of our research.
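The edge-based selection of scan regions can be sketched as follows (a generic gradient-threshold illustration; the paper's actual edge detection algorithms are not specified in this abstract): image cells whose gradient magnitude exceeds a threshold are flagged as worth the comparatively slow 3D laser measurement.

```python
import numpy as np

def edge_regions(img, thresh):
    # central-difference gradient magnitude as an edge cue; pixels
    # above `thresh` are flagged for the slow 3D laser radar scan
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.hypot(gx, gy) > thresh

scene = np.zeros((6, 6))
scene[2:4, 2:4] = 1.0                 # a bright object in the image
mask = edge_regions(scene, 0.5)
print(int(mask.sum()))                # → 12 pixels selected for 3D measurement
```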
A method is discussed for determining range from an image blurred by lateral translation of the camera's optical axis. An optical system that accomplishes optical-axis translation without actual motion of the camera is described. To determine range from the motion blur, the method of slopes, one of three methods previously tested on isoplanatic surfaces, is extended to inclined planar surfaces. Tests are carried out on inclined planar and cylindrical surfaces. Experimental results for both ideal images and images containing Gaussian noise are discussed. It is concluded that the method of slopes performs adequately and that the computational scheme is appropriate for further study using more complex images.
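The underlying geometry can be illustrated with a simple pinhole relation (an assumption-laden sketch, not the paper's method of slopes): translating the optical axis laterally by a baseline b smears a point at range Z over a streak of length f·b/Z on the sensor, just like stereo disparity, so the measured blur extent can be inverted to range.

```python
def range_from_blur(f_mm, baseline_mm, blur_mm):
    # pinhole model, Z >> f: blur = f * baseline / Z on the sensor,
    # so Z = f * baseline / blur (illustrative geometry only)
    return f_mm * baseline_mm / blur_mm

# a 50 mm lens, 20 mm axis translation, 0.2 mm measured blur streak:
print(range_from_blur(50.0, 20.0, 0.2), "mm")  # about 5000 mm, i.e. 5 m
```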
The POLDER instrument is a wide-field-of-view radiometer designed to measure the polarization and directionality of the solar radiation reflected by the Earth-atmosphere system in the visible and near-infrared spectrum. The original instrument concept of POLDER makes it possible to observe, over a single pass, any target within the instrument swath under up to 13 different viewing angles. For each viewing angle, the target is imaged in 8 narrow spectral bands, and for 3 of these channels at three different polarization angles. The multi-mission scientific objectives of POLDER lead to stringent radiometric and geometric requirements; this paper describes the POLDER instrument characteristics and the pre-flight performance measured on the flight model. Developed by CNES, the French space agency, POLDER is installed on the ADEOS platform developed by NASDA, the Japanese space agency. It will be launched in August 1996.
Area-based matching of intensity images is a well-known technique applied to various photogrammetric tasks such as parallax measurement, point transfer, camera orientation, and DTM reconstruction. The intensities of two or more images are the observables of a least-squares estimation process which aims at deriving the parameters of a geometric model. For matching two images, the most widely used geometric model is an affine mapping between local areas of the image pair. The high precision of area-based matching, about 1/10 of the pixel size, has been verified experimentally. This rule of thumb also holds roughly for the various generalizations of the least-squares matching model, including multi-image, object-space-oriented, geometrically constrained, and other variations. Up to now, little attention has been given to extending the matching model to color or multispectral images, although color is generally considered an important cue for identification and recognition processes. The purpose of this paper is to investigate quality differences between area-based matching of color or multichannel images and matching of single-channel images. Multichannel image matching is formulated using a vector-valued image function. For the experimental investigation, aerial color images from two projects are used: an RGB image pair and an IR image pair. The main results of this study are that (1) multichannel image matching often yields a precision very close to that of single-channel matching using the red or IR channel, respectively, or to matching based on an intensity image obtained by averaging the three channels, and (2) multichannel image matching has a larger convergence radius when small mask sizes are used.
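The least-squares matching idea can be sketched in its simplest 1D, shift-only form (the paper's model is affine and multichannel; this is a reduced illustration): linearize the patch around the current shift estimate and solve the normal equation iteratively, Gauss-Newton style, for a sub-pixel shift.

```python
import numpy as np

def lsm_shift(f, g, iters=10):
    # 1D least-squares matching: estimate the sub-pixel shift s with
    # g(x + s) ~ f(x), linearised as g(x+s) ~ g(x) + s * g'(x)
    x = np.arange(len(f), dtype=float)
    s = 0.0
    for _ in range(iters):
        gs = np.interp(x + s, x, g)        # resample g at shifted positions
        dg = np.gradient(gs)               # derivative w.r.t. the shift
        ds = np.sum(dg * (f - gs)) / np.sum(dg * dg)
        s += ds
    return s

x = np.arange(40, dtype=float)
f = np.exp(-0.5 * ((x - 20.0) / 4.0) ** 2)
g = np.exp(-0.5 * ((x - 21.3) / 4.0) ** 2)   # f shifted by +1.3 px
print(round(lsm_shift(f, g), 2))             # recovers the sub-pixel shift
```

In the multichannel formulation, the residuals of all channels simply enter one joint normal equation system for the same geometric parameters.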
The Modular Optoelectronic Multispectral Scanner (MOMS-02) was flown successfully on the German Spacelab mission D2 in April/May 1993. Its outstanding feature is along-track stereoscopic imaging. Imaging mode 3 of MOMS combines the two off-nadir panchromatic channels with two nadir-looking color channels (red and near infrared). The ground pixel size is 13.5 m × 13.5 m, and the base-to-height ratio is about 0.8. The paper reports results of the photogrammetric evaluation of MOMS mode 3 data over Mexico (D2 orbit 82) and Ethiopia (D2 orbit 61). The evaluation is subdivided into three major steps: (1) Automatic image matching to derive large numbers of conjugate points, performed with software developed at DLR for the joint Indian-German stereo scanner project MEOSS and successfully applied to MEOSS airborne imagery. (2) Combined point determination for the reconstruction of the exterior orientation and the calculation of the ground coordinates of a subset of conjugate points, using the photogrammetric bundle adjustment software CLIC developed at the Technical University of Munich. For Ethiopia, empirical accuracies of 23 m in X, 19 m in Y, and 13 m in Z were obtained using 34 independent check points. (3) Generation of a digital terrain model from a dense network of conjugate points, which are first transformed into object space by multiple forward intersection. For the Ethiopian example, the conjugate point network was densified by additional image matching based on the region growing approach proposed by Otto and Chau. A report on the results achieved is given.
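The forward intersection step can be sketched for the minimal two-ray case (the pipeline uses multiple rays; this is an illustrative reduction): each conjugate point defines a ray from its projection centre, and the object point is taken as the point of closest approach of the rays.

```python
import numpy as np

def forward_intersection(c1, d1, c2, d2):
    # least-squares intersection of two image rays c_i + t_i * d_i:
    # solve for t_1, t_2 and return the midpoint of closest approach
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    A = np.stack([d1, -d2], axis=1)              # 3x2 system matrix
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1 = c1 + t[0] * d1
    p2 = c2 + t[1] * d2
    return 0.5 * (p1 + p2)

# two projection centres whose rays both point at the ground point (0,0,0):
c1 = np.array([-400.0, 0.0, 800.0])
c2 = np.array([400.0, 0.0, 800.0])
p = forward_intersection(c1, -c1, c2, -c2)
print(np.allclose(p, 0.0, atol=1e-6))  # → True: rays meet at the ground point
```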
During the second German Spacelab mission D2, successfully flown in spring 1993, the Modular Optoelectronic Multispectral Scanner MOMS-02 acquired digital, high-resolution, along-track, threefold stereoscopic imagery of the Earth's surface. The scientific processing of the data is conducted by several German university institutes and the German Aerospace Research Establishment (DLR). The major goal is to realize the entire photogrammetric processing chain using digital MOMS-02 imagery. The emphasis of this paper is on photogrammetric point determination using MOMS-02 panchromatic three-line imagery of Australia (D2 orbit 75b). Two independent sets of conjugate image points were derived, one at Stuttgart University using a feature-based matching approach and one at the Technical University of Munich using a modified Otto/Chau region growing algorithm which had previously been applied with good success to SPOT and airborne MEOSS line scanner imagery. In the bundle adjustment, a refined mathematical model of the interior and exterior orientation takes the specific MOMS-02 geometry into account. After describing the derivation and characteristics of the input data, the functional model is briefly summarized. The results of the bundle adjustment, compared against 43 independent check points, are presented and discussed. An empirical standard deviation of approximately 1 pixel (13.5 m) was obtained for each coordinate. Finally, conclusions are drawn from this first experience with the MOMS-02/D2 data, and an outlook is given towards the forthcoming MOMS-2P/PRIRODA mission onboard the Russian space station MIR, scheduled for early 1996.
Images are degraded both by blur, described by a deterministic point spread function, and by additive noise. The removal of blur is an ill-posed inverse problem; prior knowledge about the original scene and the point spread function, as well as noise information about the image-forming system, is therefore necessary. Many different algorithms have been proposed in the past. In this paper an approach for comparing them is presented and demonstrated on several example algorithms.
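One classical example of such a restoration algorithm is the Wiener filter (shown here as a 1D illustration of how noise knowledge regularises the ill-posed inversion; the paper's own comparison set is not listed in this abstract): the noise-to-signal ratio damps frequencies where the blur transfer function is small.

```python
import numpy as np

def wiener_deconv(blurred, psf, nsr):
    # frequency-domain Wiener filter F = G * H_conj / (|H|^2 + NSR);
    # `nsr` (noise-to-signal ratio) regularises near the zeros of H
    H = np.fft.fft(psf, n=len(blurred))
    G = np.fft.fft(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(F))

signal = np.zeros(64)
signal[30] = 1.0                                 # an impulse "scene"
psf = np.array([0.25, 0.5, 0.25])                # small blur kernel
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf, 64)))
restored = wiener_deconv(blurred, psf, 1e-3)
print(int(np.argmax(restored)))                  # location of the recovered impulse
```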
Virtual reality is becoming increasingly important as a tool to provide cost-effective alternatives for training and enhanced capabilities for activities such as mission preview, planning, and rehearsal. The ability to generate virtual reality from a photo database or remotely sensed satellite imagery is of particular interest. The key to ensuring the success of remote-sensing-based virtual reality is a system that can quickly reconstruct a 3D scene in object space with a realistic appearance. This paper proposes a system to accomplish this task. The main issues of the system are: (1) image registration, (2) feature correspondence and extrusion, and (3) realistic 3D feature rendering. Image registration is achieved with a novel method based on the higher-dimension concept. For high speed, feature correspondence is implemented using a mathematically well-defined, edge-based method in a multiresolution scheme. Realistic 3D feature rendering creates a photorealistic scene. To further accelerate processing, the system is to be implemented on an nCUBE 2 parallel computer with a Silicon Graphics workstation as the host machine. An example is presented to demonstrate the capability of the system.
Imaging mode 3 of the Modular Optoelectronic Multispectral Scanner (MOMS-02) combines two along-track panchromatic stereo bands (520 - 760 nm) with two multispectral bands in the red (645 - 680 nm) and near-infrared (770 - 810 nm) wavelength range. This special design offers the opportunity to combine the stereoscopic evaluation of relief parameters with thematic information on actual land use from the multispectral bands. Applying the Universal Soil Loss Equation (USLE) to a test site in the Ethiopian Highlands requires actual relief and thematic information in order to derive an actual erosion risk map. MOMS-02 multispectral data were used as a source for soil type mapping and for estimating actual land cover and management intensity. Potential natural vegetation cover and agroclimatic data were digitized from official large-scale maps. Relief parameters such as slope gradient and slope length were derived from a digital terrain model (DTM). DTMs were generated from aerial photographs (1:50,000) and from MOMS stereo data; they were geocoded and validated by differential GPS measurements. Both DTMs were compared in order to show the potential of DTMs derived from MOMS. GIS-based evaluation of the extracted parameters finally leads to an actual erosion risk map of the Dir Dira test site. Using land use information from old topographic maps as a parameter of the USLE results in a historical erosion risk map. The comparison of the two maps shows a considerable increase in endangered areas within 10 years.
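The USLE itself is a simple multiplicative model, A = R · K · LS · C · P (rainfall erosivity, soil erodibility, slope length/steepness, cover management, and support practice factors). A minimal sketch with purely illustrative factor values (not the Dir Dira test-site data):

```python
def usle(r, k, ls, c, p):
    # Universal Soil Loss Equation: mean annual soil loss
    # A = R * K * LS * C * P; LS is the factor derived from the DTM,
    # C comes from the land-cover classification
    return r * k * ls * c * p

# illustrative factor values only, chosen for the example:
a = usle(r=300.0, k=0.3, ls=2.5, c=0.2, p=1.0)
print(a, "t/(ha*yr)")  # roughly 45
```

In the workflow above, only LS changes between the MOMS-derived and photogrammetric DTMs, while C changes between the actual and historical land-use maps.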