Paper
14 February 1992
Vision-based method for autonomous landing
Huseyin Hakan Yakali, Daniel Raviv
Proceedings Volume 1613, Mobile Robots VI; (1992) https://doi.org/10.1117/12.135197
Event: Robotics '91, 1991, Boston, MA, United States
Abstract
This paper presents eight visual cues to be used as part of a control loop for autonomous landing. The idea is based on fixating the camera on the vanishing point of the runway's projection in the image. Two-dimensional geometric relations in the image are used to derive the visual cues. The extracted cues are the location of the camera relative to the runway, expressed in units of the runway width, the orientation of the camera, and relevant angles such as the glide-slope angle.
© (1992) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Huseyin Hakan Yakali and Daniel Raviv "Vision-based method for autonomous landing", Proc. SPIE 1613, Mobile Robots VI, (14 February 1992); https://doi.org/10.1117/12.135197
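As a rough illustration of the vanishing-point idea described in the abstract, the sketch below assumes a calibrated pinhole camera and two runway edge lines already detected in the image: it intersects the edges (in homogeneous coordinates) to locate the vanishing point and converts its vertical offset from the principal point into an elevation-style angle. All function names, pixel coordinates, and the focal length are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

def line_through(p1, p2):
    """Homogeneous line through two image points given in pixel coordinates."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def vanishing_point(edge_a, edge_b):
    """Intersect the two runway-edge lines; their crossing is the vanishing point."""
    vp = np.cross(line_through(*edge_a), line_through(*edge_b))
    return vp[:2] / vp[2]

def elevation_angle_deg(vp, principal_point, focal_px):
    """Angle (degrees) of the ray to the vanishing point relative to the optical
    axis in the vertical image direction; a glide-slope-like quantity under the
    simplifying assumption of a level, forward-looking camera."""
    dy = principal_point[1] - vp[1]          # image y grows downward
    return np.degrees(np.arctan2(dy, focal_px))

# Illustrative pixel coordinates for the left and right runway edges.
left_edge = ((100.0, 480.0), (300.0, 240.0))
right_edge = ((540.0, 480.0), (340.0, 240.0))

vp = vanishing_point(left_edge, right_edge)
print("vanishing point (px):", vp)                       # ~(320, 216)
print("elevation-style angle (deg):",
      elevation_angle_deg(vp, (320.0, 240.0), 600.0))    # ~2.3 deg
```

In the paper's scheme such image measurements would feed a control loop for landing; the sketch shows only the geometric extraction step, not the control design.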
CITATIONS
Cited by 2 scholarly publications.
KEYWORDS
Cameras, Visualization, Coded apertures, Mobile robots, Radar, Imaging systems, Signal detection
