This work extends earlier work on the real-time photogrammetric stitching of staring arrays of high-resolution videos on commercial off-the-shelf hardware. The blending is further optimised for Graphics Processing Unit (GPU) implementation and extended from one to two dimensions, allowing multiple layers or arbitrary arrangements of cameras. The incorporation of stabilisation inputs allows the stitching algorithm to provide space-stabilised panoramas. A further contribution decreases the stitching procedure's sensitivity to depth, especially for wide aperture baselines. Finally, timing tests and some resulting stitched panoramas are presented and discussed.
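For illustration only, here is a minimal NumPy sketch of one common way to blend overlapping camera tiles in two dimensions: per-pixel feathered averaging, where each tile's weight falls off linearly toward its borders in both axes. The function names, the separable linear feather, and the CPU formulation are assumptions for the sketch, not the authors' GPU-optimised method.

```python
import numpy as np

def feather_weights(h, w):
    # 2D feather: weight rises linearly from every border toward the tile
    # centre, so overlaps blend smoothly in both dimensions.
    y = np.minimum(np.arange(h), np.arange(h)[::-1]).astype(float) + 1.0
    x = np.minimum(np.arange(w), np.arange(w)[::-1]).astype(float) + 1.0
    return np.outer(y, x)

def blend_panorama(tiles, offsets, out_h, out_w):
    # tiles: list of (h, w, 3) float arrays already warped into panorama
    # coordinates; offsets: matching (row, col) top-left corners, assumed
    # to place every tile fully inside the output.
    acc = np.zeros((out_h, out_w, 3))
    wsum = np.zeros((out_h, out_w))
    for tile, (r, c) in zip(tiles, offsets):
        h, w, _ = tile.shape
        wgt = feather_weights(h, w)
        acc[r:r + h, c:c + w] += tile * wgt[..., None]
        wsum[r:r + h, c:c + w] += wgt
    wsum[wsum == 0.0] = 1.0          # leave uncovered pixels black
    return acc / wsum[..., None]
```

Because the weighted accumulation is independent per pixel, this style of blend maps naturally onto a GPU, with one thread per output pixel.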
Inverse lens distortion modelling allows one to find the pixel in a distorted image that corresponds to a known point in object space, such as one produced by RADAR. This paper extends recent work that uses neural networks as a compromise between processing complexity, memory usage, and accuracy. The already encouraging results are further enhanced by considering different neuron activation functions, architectures, scaling methodologies, and training techniques.
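As a sketch of the underlying idea only: a calibrated distortion-correction model maps distorted pixels to undistorted positions, but its inverse generally has no closed form, so a small network can be trained on reversed sample pairs to approximate it. The radial model, its coefficients, and the network size and activation below are illustrative assumptions, not the paper's architectures or training schemes.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical calibrated correction model: maps a distorted point (in
# normalised image coordinates) to its undistorted position.
# Coefficients are illustrative only.
K1, K2 = -0.25, 0.05
def undistort(xy_d):
    r2 = np.sum(xy_d ** 2, axis=1, keepdims=True)
    return xy_d * (1.0 + K1 * r2 + K2 * r2 ** 2)

# Sample distorted points, push them through the known model, and train
# a small network on the reversed pairs (undistorted -> distorted).
rng = np.random.default_rng(0)
xy_d = rng.uniform(-1.0, 1.0, size=(20000, 2))
xy_u = undistort(xy_d)

net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                   max_iter=3000, tol=1e-8, random_state=0)
net.fit(xy_u, xy_d)

# Given a known object-space point projected to undistorted coordinates,
# the network predicts where it appears in the raw (distorted) image.
probe = undistort(rng.uniform(-0.9, 0.9, size=(5, 2)))
print(net.predict(probe))
```

Relative to an iterative numerical inversion per query point, a trained network trades a one-off training cost for constant, predictable lookup time, which is the complexity/memory/accuracy compromise the abstract refers to.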