In this paper we present a method to visualize geo-referenced objects on modern smartphones using a multi-functional application design. The application applies different localization and visualization methods, including ones based on the smartphone camera image, and copes well with different scenarios. A generic application workflow and augmented reality visualization techniques are described. The feasibility of the approach is experimentally validated using an online desktop selection application in a network with a modern off-the-shelf smartphone. Applications are widespread and include, for instance, crisis and disaster management or military applications.
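As a rough illustration of the augmented reality overlay step, the sketch below projects a geo-referenced point into the camera image given the phone's GPS fix, an orientation estimate and the camera intrinsics. The flat-earth coordinate conversion, the coordinate conventions and all names are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch: overlaying a geo-referenced object on a smartphone camera image.
# Assumes the device reports its own GPS fix and an orientation matrix R_cam_from_enu
# (rotation from local East-North-Up coordinates into the camera frame); the names and
# the flat-earth approximation below are illustrative, not the paper's exact method.
import numpy as np

EARTH_RADIUS = 6378137.0  # WGS-84 equatorial radius in metres

def geodetic_to_enu(lat, lon, alt, lat0, lon0, alt0):
    """Flat-earth approximation: offset of (lat, lon, alt) from the observer in ENU metres."""
    d_lat = np.radians(lat - lat0)
    d_lon = np.radians(lon - lon0)
    east = d_lon * EARTH_RADIUS * np.cos(np.radians(lat0))
    north = d_lat * EARTH_RADIUS
    up = alt - alt0
    return np.array([east, north, up])

def project_to_image(p_enu, R_cam_from_enu, K):
    """Project an ENU point into pixel coordinates with a pinhole camera model."""
    p_cam = R_cam_from_enu @ p_enu          # rotate into the camera frame
    if p_cam[2] <= 0:                       # behind the camera -> not visible
        return None
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]                 # (u, v) pixel position for the AR overlay

# Example: observer at (48.0 deg, 11.0 deg, 500 m), object ~200 m north and 20 m higher.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0],    # camera looking due north: x = east,
              [0.0, 0.0, -1.0],   # y = down,
              [0.0, 1.0, 0.0]])   # z = north (forward)
p = geodetic_to_enu(48.0018, 11.0, 520.0, 48.0, 11.0, 500.0)
print(project_to_image(p, R, K))
```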
Recognizing the location where an image was taken, solely based on visual content, is an important problem in computer vision, robotics and remote sensing. This paper evaluates the performance of standard approaches for location recognition when applied to large-scale aerial imagery in both the electro-optical (EO) and infrared (IR) domains. We present guidelines for optimizing performance and explore how well a standard location recognition system handles IR data. On three datasets we show that the performance of the system increases substantially if SIFT descriptors computed on Hessian-Affine regions are used instead of SURF features. Applications are widespread and include vision-based navigation, precise object geo-referencing or mapping.
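For illustration, a minimal retrieval-style location recognition sketch is given below. It uses OpenCV's stock SIFT detector as a stand-in, since Hessian-Affine region detectors are not part of standard OpenCV; the file names and the ratio-test threshold are likewise illustrative.

```python
# Minimal sketch of descriptor-based location recognition against a geo-referenced
# reference set. OpenCV's stock SIFT detector stands in for the SIFT-on-Hessian-Affine
# pipeline evaluated in the paper; file names and thresholds are illustrative.
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def describe(path):
    """Detect keypoints and compute SIFT descriptors for one image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    return desc

def count_good_matches(desc_query, desc_ref, ratio=0.8):
    """Lowe ratio test: keep matches whose best distance is clearly below the second best."""
    matches = matcher.knnMatch(desc_query, desc_ref, k=2)
    return sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)

def recognize(query_path, reference_paths):
    """Return the reference image (and hence its geo-tag) with the most good matches."""
    desc_q = describe(query_path)
    scores = {ref: count_good_matches(desc_q, describe(ref)) for ref in reference_paths}
    return max(scores, key=scores.get), scores

# Example usage (illustrative file names):
# best, scores = recognize("query.jpg", ["tile_001.jpg", "tile_002.jpg"])
```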
Low-cost depth sensors have been a huge success in the fields of computer vision and robotics, providing depth images even in untextured environments. The same characteristic applies to the Kinect V2, a time-of-flight camera with high lateral resolution. In order to assess the advantages of the new sensor over its predecessor for standard applications, we provide an analysis of measurement noise, accuracy and other error sources of the Kinect V2. We examine the raw sensor data using an open source driver. Further insights into the sensor design and examples of processing techniques are given to fully exploit the unrestricted access to the device.
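The kind of temporal-noise analysis mentioned above can be sketched as follows, assuming a stack of depth frames of a static scene has already been captured with the open source driver; the array shapes and file name are illustrative.

```python
# Minimal sketch of a temporal-noise analysis: per-pixel standard deviation over a
# stack of depth frames of a static scene. Frame acquisition via the open-source
# driver is assumed to have happened already; shapes and file names are illustrative.
import numpy as np

def temporal_noise(depth_stack_mm):
    """depth_stack_mm: (N, H, W) array of N depth frames in millimetres.
    Returns per-pixel mean depth and per-pixel standard deviation (the noise
    estimate), ignoring invalid (zero) returns."""
    stack = np.where(depth_stack_mm > 0, depth_stack_mm, np.nan)
    mean_depth = np.nanmean(stack, axis=0)
    noise = np.nanstd(stack, axis=0)
    return mean_depth, noise

# Example with a recorded static scene (file name illustrative):
# frames = np.load("wall_frames.npy")        # shape (N, 424, 512), Kinect V2 depth resolution
# mean_depth, noise = temporal_noise(frames)
# print("median temporal noise [mm]:", np.nanmedian(noise))
```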
The wide availability of previously acquired, geo-referenced imagery enables automatic video-based solutions for high-precision object geo-localization and cooperative visualization. We present a system which geo-references objects seen in UAV video streams, distributes this information in a sensor network and visualizes the objects on modern smartphones using augmented reality techniques. The feasibility of the approach was experimentally validated using Mini-UAV ("MD-400") and high-altitude UAV video footage in combination with modern off-the-shelf smartphones. Applications are widespread and include, for instance, crisis and disaster management or military applications.
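A common building block for such video-based geo-localization is back-projecting an image detection onto the ground. The sketch below intersects the viewing ray through a pixel with a flat ground plane, assuming a known camera pose; this is a simplified illustration, not the system's actual geo-referencing pipeline, and all names are assumptions.

```python
# Minimal sketch of one common way to geo-reference an object seen in a UAV video frame:
# back-project the pixel through the calibrated camera and intersect the viewing ray
# with a flat ground plane at a known height. The known camera pose (R_enu_from_cam,
# camera position) and the flat-ground assumption are simplifications for illustration.
import numpy as np

def pixel_to_ground(u, v, K, R_enu_from_cam, cam_pos_enu, ground_height=0.0):
    """Return the ENU ground point hit by the viewing ray through pixel (u, v)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in camera coordinates
    ray_enu = R_enu_from_cam @ ray_cam                    # rotate into the local ENU frame
    if abs(ray_enu[2]) < 1e-9:
        raise ValueError("ray is parallel to the ground plane")
    t = (ground_height - cam_pos_enu[2]) / ray_enu[2]     # ray parameter at the plane
    if t <= 0:
        raise ValueError("ground plane is behind the camera")
    return cam_pos_enu + t * ray_enu

# Example: nadir-looking camera 100 m above flat ground.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0],     # camera x = east,
              [0.0, -1.0, 0.0],    # camera y = south,
              [0.0, 0.0, -1.0]])   # camera z (forward) = straight down
print(pixel_to_ground(400.0, 240.0, K, R, np.array([0.0, 0.0, 100.0])))
```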
Region-based active contours are a variational framework for image segmentation that involves estimating the probability distributions of observed features within each image region. These so-called region descriptors are then used to generate forces that move the contour toward real image boundaries. In this paper, region descriptors are computed from samples within windows centered on contour pixels; we call them local region descriptors (LRDs). With these descriptors we introduce an equation for contour motion with two terms: growing and competing. This equation yields a novel type of active contour that can adjust the behavior of contour pieces to image patches and to the presence of other contours. The quality of the proposed motion model is demonstrated on complex images.
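The following sketch illustrates one plausible form of such local region descriptors: per-contour-pixel intensity histograms for the inside and outside parts of a window centered on that pixel. The window size, histogram binning and use of raw intensities are assumptions for illustration, not necessarily the paper's choices.

```python
# Minimal sketch of local region descriptors (LRDs): for each contour pixel, intensity
# samples are taken from a window centred on that pixel, split into the parts inside and
# outside the current contour, and summarised as normalised histograms. Window size,
# number of bins and the use of plain intensity histograms are illustrative choices.
import numpy as np

def local_region_descriptors(image, inside_mask, contour_pixels, half_win=7, bins=16):
    """Return, for each contour pixel (row, col), a pair of normalised intensity
    histograms (inside-region, outside-region) estimated from the local window."""
    h, w = image.shape
    descriptors = []
    for (r, c) in contour_pixels:
        r0, r1 = max(0, r - half_win), min(h, r + half_win + 1)
        c0, c1 = max(0, c - half_win), min(w, c + half_win + 1)
        patch = image[r0:r1, c0:c1]
        mask = inside_mask[r0:r1, c0:c1]
        hist_in, _ = np.histogram(patch[mask], bins=bins, range=(0, 256), density=True)
        hist_out, _ = np.histogram(patch[~mask], bins=bins, range=(0, 256), density=True)
        descriptors.append((hist_in, hist_out))
    return descriptors

# The inside/outside histograms could then drive growing and competing force terms,
# e.g. by comparing the intensity at each contour pixel against both local distributions.
```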
In this paper we present a novel, fast method for the non-rigid registration of a few X-ray projections with CT data. The method involves non-parametric non-rigid registration techniques for the difficult 2D-3D case, combined with knowledge of probable deformations modeled as active shape models (ASMs). ASMs allow us to cope with as few as two projections by regularizing the registration process. The model is learned from deformations observed during respiration in a 4D-CT. This method can be applied in motion-compensated radiation therapy to eliminate the need for fiducial implantation. We designed a fast C++ implementation of our method in order to make it practicable. Our tests on real 4D-CT data achieved registration times of 2-4 minutes on a desktop PC.
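The ASM-based regularization can be illustrated roughly as follows: deformation fields observed over the respiratory phases of a 4D-CT are reduced with PCA, and a candidate deformation from the registration step is projected onto (and clamped within) the learned subspace. The number of modes and the 3-sigma clamp are illustrative assumptions, not the paper's stated parameters.

```python
# Minimal sketch of ASM-style regularisation of a deformation field: learn a PCA model
# from deformations observed in a 4D-CT and constrain a candidate deformation to the
# learned low-dimensional subspace. Mode count and the 3-sigma clamp are illustrative.
import numpy as np

def learn_asm(training_deformations, n_modes=3):
    """training_deformations: (n_phases, n_voxels*3) array of flattened deformation fields.
    Returns the mean deformation, the principal modes and their standard deviations."""
    mean = training_deformations.mean(axis=0)
    centred = training_deformations - mean
    _, s, vt = np.linalg.svd(centred, full_matrices=False)   # principal deformation modes
    modes = vt[:n_modes]
    std = s[:n_modes] / np.sqrt(max(len(training_deformations) - 1, 1))
    return mean, modes, std

def regularize(deformation, mean, modes, std, n_sigma=3.0):
    """Project a candidate deformation onto the ASM subspace and clamp each coefficient."""
    coeffs = modes @ (deformation - mean)
    coeffs = np.clip(coeffs, -n_sigma * std, n_sigma * std)
    return mean + modes.T @ coeffs
```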