One crucial ingredient for augmented reality applications is having or obtaining information about the environment. In this paper, we examine the case of an augmented video application for forward-facing vehicle-mounted cameras. In particular, we examine methods of obtaining geometry information about the environment via stereo computation and structure from motion. A detailed analysis of the geometry of the problem is provided, in particular of the singularity in front of the vehicle. For typical scenes, we compare monocular configurations with stereo configurations subject to the packaging constraints of forward-facing cameras in consumer vehicles.
KEYWORDS: 3D modeling, 3D image processing, Clouds, Photogrammetry, Data modeling, 3D image reconstruction, Visualization, Cultural heritage, Image processing, Laser scanners
A system designed and developed for the three-dimensional (3-D) reconstruction of cultural heritage (CH) assets is presented. Two basic approaches are presented. The first one, resulting in an “approximate” 3-D model, uses images retrieved in online multimedia collections; it employs a clustering-based technique to perform content-based filtering and eliminate outliers that significantly reduce the performance of 3-D reconstruction frameworks. The second one is based on input image data acquired through terrestrial laser scanning, as well as close range and airborne photogrammetry; it follows a sophisticated multistep strategy, which leads to a “precise” 3-D model. Furthermore, the concept of change history maps is proposed to address the computational limitations involved in four-dimensional (4-D) modeling, i.e., capturing 3-D models of a CH landmark or site at different time instances. The system also comprises a presentation viewer, which manages the display of the multifaceted CH content collected and created. The described methods have been successfully applied and evaluated in challenging real-world scenarios, including the 4-D reconstruction of the historic Market Square of the German city of Calw in the context of the 4-D-CH-World EU project.
In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, where AndroidSfM estimates the pose of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
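The dense point cloud step described above triangulates matched pixels from the oriented photos. A minimal two-view linear (DLT) triangulation can be sketched as follows; the camera matrices and point here are illustrative assumptions, not AndroidSfM's actual code:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two oriented views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image coords."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous point X minimizes |A X| subject to |X| = 1:
    # take the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two simple cameras: identity pose and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # close to [0.5, 0.2, 4.0]
```

With noise-free projections the true point is recovered exactly; real dense matching runs this (or a more robust variant) for millions of correspondences.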
Andreas Hadjiprocopis, Marinos Ioannides, Konrad Wenzel, Mathias Rothermel, Paul Johnsons, Dieter Fritsch, Anastasios Doulamis, Eftychios Protopapadakis, Georgia Kyriakaki, Kostas Makantasis, Guenther Weinlinger, Michael Klein, Dieter Fellner, Andre Stork, Pedro Santos
KEYWORDS: 3D modeling, Cameras, 3D image processing, 3D image reconstruction, Image retrieval, Image processing, Data modeling, Image filtering, Internet, Photography
One of the main characteristics of the Internet era we are living in is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web3.0 standard. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from thousands of images floating around the web. This paper aims to provide an update of our progress in designing and implementing a pipeline for searching, filtering and retrieving photographs from Open Access Image Repositories and social media sites and using these images to build accurate 3D models of archaeological monuments, as well as enriching multimedia of cultural / archaeological interest with metadata and harvesting the end products to EUROPEANA. We provide details of how our implemented software searches and retrieves images of archaeological sites from Flickr and Picasa repositories, as well as strategies for filtering the results on two levels: a) based on their built-in metadata, including geo-location information, and b) based on image processing and clustering techniques. We also describe our implementation of a Structure from Motion pipeline designed for producing 3D models using the large collection of 2D input images (>1000) retrieved from Internet repositories.
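The first, metadata-based filtering level can be illustrated by a geotag radius test: keep only photos whose embedded location lies near the monument. The site coordinates and photo records below are illustrative, not taken from the paper's dataset:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two WGS-84 points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def filter_by_location(photos, site_lat, site_lon, radius_km=1.0):
    """Keep only photos whose geotag lies within radius_km of the site;
    photos without a geotag fall through to the image-based second level."""
    return [p for p in photos
            if "lat" in p and "lon" in p
            and haversine_km(p["lat"], p["lon"], site_lat, site_lon) <= radius_km]

# Illustrative records, site near the Acropolis of Athens (37.9715 N, 23.7257 E).
photos = [
    {"id": "a", "lat": 37.9716, "lon": 23.7260},  # on site
    {"id": "b", "lat": 48.8584, "lon": 2.2945},   # Paris -- outlier, dropped
    {"id": "c"},                                   # no geotag -- dropped here
]
kept = filter_by_location(photos, 37.9715, 23.7257)
print([p["id"] for p in kept])  # ['a']
```

In practice this cheap pass removes gross outliers before the far more expensive clustering and SfM stages see the images.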
KEYWORDS: 3D modeling, Visualization, Data modeling, Cultural heritage, Detection and tracking algorithms, Sensors, Photography, Solid modeling, Modeling, Cameras
One of the main characteristics of the Internet era we are living in is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web3.0 standard. Search engines can search text for keywords using algorithms of varied intelligence and with limited success. Searching images is a much more complex and computationally intensive task, but some initial steps have already been made in this direction, mainly in face recognition. This paper aims to describe our proposed pipeline for integrating data available on Internet repositories and social media, such as photographs, animation and text, to produce 3D models of archaeological monuments, as well as enriching multimedia of cultural / archaeological interest with metadata and harvesting the end products to EUROPEANA. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from thousands of images floating around the web.
KEYWORDS: 3D modeling, 3D image processing, Clouds, Microelectromechanical systems, Sensors, Data modeling, RGB color model, Photography, Buildings, Image segmentation
Within the paper, we present an approach for the alignment of point clouds collected by the RGB-D sensor Microsoft Kinect, using a MEMS IMU and a coarse 3D model derived from a photographed evacuation plan. In this approach, the alignment of the point clouds is based on the sensor pose, which is computed from an analysis of the user's track, the normal vectors of the ground points, and information extracted from the coarse 3D model. The user's positions are derived from a foot-mounted MEMS IMU, based on zero velocity updates, together with information extracted from the coarse 3D model. We then estimate the accuracy of point cloud alignment using this approach and discuss applications of this method in indoor modeling of buildings.
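The zero velocity updates mentioned above hinge on detecting stance phases, i.e. moments when the foot-mounted IMU is at rest. A common sketch is to threshold the variance of the acceleration magnitude over a sliding window; the window size and threshold below are illustrative assumptions, not the paper's tuned values:

```python
import math

GRAVITY = 9.81

def detect_zupt(accel, window=5, var_threshold=0.05):
    """Flag samples where the foot is at rest: the variance of the
    acceleration magnitude over a sliding window falls below a threshold.
    accel: list of (ax, ay, az) specific-force samples in m/s^2."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in accel]
    flags = []
    for i in range(len(mags)):
        lo, hi = max(0, i - window // 2), min(len(mags), i + window // 2 + 1)
        seg = mags[lo:hi]
        mean = sum(seg) / len(seg)
        var = sum((m - mean) ** 2 for m in seg) / len(seg)
        flags.append(var < var_threshold)
    return flags

# Stationary samples (pure gravity) followed by a swing phase.
samples = [(0.0, 0.0, GRAVITY)] * 6 + [(3.0, 1.0, 12.0), (4.0, -2.0, 7.0)] * 3
flags = detect_zupt(samples)
print(flags[:4], flags[-2:])  # stance flagged True, swing flagged False
```

At each flagged sample the strapdown integration's velocity estimate is reset to zero, which is what keeps the position drift of a low-cost MEMS IMU bounded between steps.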
We address the problem of automating the processing of dense range data, specifically the automated interpretation of such data containing curved surfaces. This is a crucial step in the automated processing of range data for applications in object recognition, measurement, re-engineering and modeling. We propose a two-stage process using model-based curvature classification as the first step. Features based on differential geometry, mainly curvature features, are ideally suited for processing objects of arbitrary shape, including curved surfaces. The second stage uses a modified region growing algorithm to perform the final segmentation. The results of the proposed approach are demonstrated on different range data sets.
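The second stage, region growing over curvature classes, can be sketched as a flood fill that groups connected pixels sharing the same class label. The two-class label image below is illustrative; the paper's classifier and growing criteria are more elaborate:

```python
from collections import deque

def region_grow(labels):
    """Group 4-connected pixels sharing the same curvature class
    (the per-pixel output of the first, classification stage)
    into numbered regions."""
    rows, cols = len(labels), len(labels[0])
    regions = [[-1] * cols for _ in range(rows)]
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if regions[r][c] != -1:
                continue
            # Breadth-first flood fill from an unassigned seed pixel.
            q = deque([(r, c)])
            regions[r][c] = next_id
            while q:
                y, x = q.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and regions[ny][nx] == -1
                            and labels[ny][nx] == labels[y][x]):
                        regions[ny][nx] = next_id
                        q.append((ny, nx))
            next_id += 1
    return regions, next_id

# 'P' = planar, 'C' = convex curvature class (illustrative labels).
labels = [
    ["P", "P", "C"],
    ["P", "C", "C"],
]
regions, n = region_grow(labels)
print(n, regions)  # 2 [[0, 0, 1], [0, 1, 1]]
```

Each resulting region is then a surface-patch hypothesis that downstream recognition or measurement steps can fit a surface model to.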
The optical MOMS-02 three-line imaging camera provided stereo data at resolutions of 13.5 m and 4.5 m during the second German Spacelab mission (D2) on board the Space Shuttle flight STS-55 from April 26 to May 6, 1993. For the verification of the stereo module, digital terrain models are derived by photogrammetric image matching techniques using intensity-based and feature-based methods, respectively. These verifications concentrate on two test sites for which ground control points are available. The paper presents a comparison of the matching results when area-based and feature-based methods are used for automatic DTM generation. It is notable that the accuracy level could be increased by a factor of 2 when area-based image matching is used for the point transfer computation. This improvement in accuracy is verified at both test sites, the Australian and the Andes site. Furthermore, the last part of the paper presents first experimental results of a simulation for matching three channels of different pixel resolution, which comes very close to the MOMS-02 architecture.
This paper presents a concept for the recognition and localization of objects which relies on multi-sensor fusion and active exploration. Today, researchers in photogrammetry generally agree that the use of complementary sensors, e.g. ranging and imaging cameras, is important for simplifying interpretation and related tasks. So far, however, little attention has been paid to the role of active exploration. Our work is part of a research program in which five institutes of Stuttgart University co-operate to develop an experimental measuring system for flexible inspection and gauging. The system will be capable of automatically determining the shape, form and class attributes of an industrial object. It then solves in a self-acting manner the measuring task associated with that object. The paper, which focuses on the object recognition concept, briefly describes the experimental measuring system and the sensors employed. A number of subsequent processing steps of the whole procedure are illustrated using initial experimental results.
KEYWORDS: Image processing, Data modeling, Image resolution, 3D modeling, Multispectral imaging, Finite element methods, Cameras, Reliability, 3D metrology, Remote sensing
The Digital Terrain Model (DTM) is one of the most elementary and important products of processing stereo image data. The 3D shape of the imaged scene is the basis for topographic mapping, orthophoto generation, geocoding multispectral image data, etc. An automatic procedure for generating DTMs from three-line imagery has been under development for several years. The main features of the algorithm are feature-based matching of points and edges extracted in all three channels, consistency checks in image and object space using the known orientation of the image strips, finite element modelling for surface representation, and a coarse-to-fine processing strategy which controls the overall processing steps. As an option, intensity-based least-squares matching is added if a most precise DTM is required. In this paper the procedure is described in detail. Processing scenes of Andes orbit 115 and Australia orbit 75B of the MOMS-02/D2 mission shows that the procedure is successful in mountainous terrain as well as in low-texture scenes. Matching the three panchromatic stereo channels is performed quickly and reliably. The experimentally found height accuracy of 3D points, determined by error propagation, is about 10 - 15 m. This accuracy level is confirmed for the Australia scene by independent check measurements using DGPS.
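The quoted 10 - 15 m height accuracy is consistent with textbook normal-case stereo error propagation, sigma_Z = (H/B) * sigma_px * GSD. The base-to-height ratio and matching precision below are illustrative assumptions, not the mission's exact parameters:

```python
def height_stddev(h_over_b, gsd, sigma_px):
    """Normal-case stereo error propagation: sigma_Z = (H/B) * sigma_parallax,
    with the parallax precision taken as the matching precision in pixels
    (sigma_px) times the ground sample distance (gsd, metres)."""
    return h_over_b * sigma_px * gsd

# Illustrative: height-to-base ratio near 1 and one-pixel matching precision
# at the 13.5 m stereo GSD already land in the reported 10 - 15 m range.
sigma_z = height_stddev(h_over_b=1.0, gsd=13.5, sigma_px=1.0)
print(sigma_z)  # 13.5
```

Sub-pixel matching precision (the factor-of-2 gain from least-squares matching reported above) directly scales sigma_Z down by the same factor.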
The paper introduces airborne laser scanner architecture and its data postprocessing. One main problem is the definition of a contiguously measured digital terrain model (DTM) by filtering neighboring strips and attaching them to each other. These problems are solved by spline approximations and datum transforms. The spline approximation starts with a bicubic polynomial, which can be reparameterized in terms of its function values and first derivatives as new unknown parameters. Filtering is carried out in a two-dimensional rectangle bordered by the nodes of the spline. The next step of the data postprocessing is the datum transform. Using a similarity transform, the seven datum parameters have to be determined by a data snooping procedure with non-parametric hypothesis tests. The reason for using non-standard test statistics is that the systematic effects produced by the sensor system itself, as well as man-made and natural 3-D phenomena, cannot be eliminated a priori perfectly. Therefore, the datum transform should give hints as to which observations are blunders and which are not.
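The reparameterization of a cubic polynomial in terms of its function values and first derivatives at the nodes is the Hermite form, of which the bicubic patch is the tensor product. A 1-D sketch (values are illustrative):

```python
def hermite(f0, f1, d0, d1, t):
    """Cubic on [0, 1] parameterized by its endpoint values (f0, f1) and
    endpoint first derivatives (d0, d1), instead of raw polynomial
    coefficients -- the per-patch reparameterization used for the spline."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * f0 + h10 * d0 + h01 * f1 + h11 * d1

# Endpoint values are reproduced exactly, which is what makes function
# values and derivatives at the nodes usable directly as the unknowns
# of the filtering adjustment.
print(hermite(1.0, 4.0, 0.0, 0.0, 0.0))  # 1.0
print(hermite(1.0, 4.0, 0.0, 0.0, 1.0))  # 4.0
print(hermite(1.0, 4.0, 0.0, 0.0, 0.5))  # 2.5 (midpoint with zero slopes)
```

Because adjacent patches share their node values and derivatives, C1 continuity across the rectangle borders comes for free in this parameterization.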
The integration of digital terrain models (DTM) in geographic information systems (GIS) automatically implies an extension of the GIS reference surface and its query space. It is evident that a DTM is the natural boundary representation of the earth's surface. Man-made objects, for instance homes, streets, bridges and dams, should be considered in a second step, because these objects cannot be represented well by boundary surfaces. The link of these objects to the DTM can be realized by keys and pointers. Therefore, an efficient DTM integration in GIS is the first task to be solved. The paper introduces DTM data structures represented by NIAM diagrams. Using the entity-relationship model, these diagrams are very capable of describing the power of relations. Next, a 3-D query space is defined, keeping in mind 3-D coordinates and 2-D topological elements. Based on this query space, spatial operators are derived which fit into standard SQL vocabulary. The implementation part of the paper uses the EXODUS storage manager to map the DTM of the Federal State of Baden-Württemberg into a spatial database system.
Working Group III/3 of ISPRS, entitled `Semantic Modelling and Image Understanding,' has organized a test on image understanding. The basic goal of this test is the integration of different information sources in the image interpretation process. In addition to aerial images all sorts of additional information can be used to derive a more reliable result, e.g. color images, stereo, GIS-planimetry. The test deals with man-made objects, for instance houses, streets, or fields of land, which are to be detected and reconstructed from the image data and further information.
The determination of the absolute orientation of image pairs is a central task in aerial photogrammetry. The usual way in photogrammetry to determine the absolute orientation is to use control points. The acquisition of control points (i.e. their signalization and preservation over a longer period) is a very time- and cost-intensive task. Using new sensors as combined systems, the effort for the acquisition of control points to determine the absolute orientation can be minimized or even avoided. Besides the airborne camera, GPS, two laser profilers and an inertial system are used as additional sensors. The laser profilers look sideways, so that during the strip-wise flight over the photogrammetric block, laser profiles at the upper and lower border of the strip are recorded. GPS and INS data are registered during the whole flight. Using this method, the following additional information is available for each image pair: (1) GPS-determined coordinates of the projection centers of each image, (2) INS-determined attitude angles for the absolute orientation of the laser profile, and (3) GPS- and INS-supported (i.e. absolutely oriented) laser profiles at the upper and lower border of each image pair. Using an iterative method that exploits the additional laser profile and GPS information, the absolute orientation of an image pair can be performed without additional control points.
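The core operation behind fitting a photogrammetric model to GPS-determined projection centers is estimating a 3-D similarity transform (scale, rotation, translation) between two point sets. A least-squares sketch using the SVD-based Umeyama solution follows; the point sets and ground truth are synthetic, not the paper's flight data:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares estimate of scale s, rotation R, translation t with
    dst ~ s * R @ src + t (Umeyama's SVD solution), e.g. fitting model
    projection centres (src) to their GPS coordinates (dst)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    # Cross-covariance between the centred point sets.
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:   # guard against a reflection
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / ((A ** 2).sum() / len(src))
    t = mu_d - s * R @ mu_s
    return s, R, t

rng = np.random.default_rng(0)
src = rng.standard_normal((6, 3))
# Synthetic ground truth: scale 2, 90-degree rotation about z, shift (1, 2, 3).
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
dst = 2.0 * src @ Rz.T + np.array([1.0, 2.0, 3.0])
s, R, t = similarity_transform(src, dst)
print(round(s, 6), np.round(t, 6))  # scale ~2, t ~[1, 2, 3]
```

In the paper's setting this transform (with the laser profile constraints) plays the role that control points normally play, which is why the seven-parameter datum can be recovered without them.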
The optimal determination of the positions of projection centres in industrial photogrammetry can be approached in two ways: on the one hand, simulations can contribute considerably to solving the problem; on the other hand, analytic formulations can be used. For powerful network design the latter are preferred, because different objective functions can be integrated into a single solution strategy. The paper reviews the state of the art in analytic first-order design. It introduces criterion matrices for the coordinates, which serve as the 'observations' for solving the design matrices. The algorithm used during optimization is the Lemke algorithm; that is, the optimization is performed by quadratic programming, which corresponds to linear complementarity problems. Besides the objective functions for the coordinates, further boundary conditions have to be considered, such as constraints on accuracy, position and costs.
Working Group V/3 of the International Society for Photogrammetry and Remote Sensing (ISPRS) was founded in 1988 at the 16th ISPRS International Congress. Since then it has worked on "Image Analysis and Image Synthesis in Close-Range Photogrammetry". The following report deals with concepts and topics which should be investigated in depth during the second half of the research period.