Presentation
pyTAG: python-based interactive training data generation for visual tracking algorithms (Conference Presentation)
27 April 2020
Ekincan Ufuktepe, Vipul Ramtekkar, Ke Gao, Noor Al-Shakarji, Joshua Fraser, Hadi AliAkbarpour, Guna Seetharaman, Kannappan Palaniappan
Abstract
In this study, a rapid training data and ground truth generation tool has been implemented for visual tracking. The proposed tool's plugin structure allows different trackers to be integrated, tested, and validated. A tracker can be paused, resumed, fast-forwarded, rewound, and re-initialized on the fly after it loses the object, a necessary step in training data generation. The tool assists researchers in rapidly generating ground truth and training data, fixing annotations, and running and visualizing their own single-object trackers or existing object tracking techniques.
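The abstract describes a plugin structure in which trackers can be paused when they lose the object, corrected, and resumed. A minimal sketch of what such an interface could look like is shown below; the class and method names (`TrackerPlugin`, `AnnotationSession`, `init`, `update`, `reinitialize`) are illustrative assumptions, not the actual pyTAG API.

```python
from abc import ABC, abstractmethod

class TrackerPlugin(ABC):
    """Minimal interface a single-object tracker plugin might implement
    (hypothetical; the real pyTAG interface is not documented here)."""

    @abstractmethod
    def init(self, frame, bbox):
        """(Re-)initialize the tracker on a frame with a bounding box."""

    @abstractmethod
    def update(self, frame):
        """Process the next frame; return (success, bbox)."""

class AnnotationSession:
    """Drives a tracker over a frame sequence, pausing on failure so the
    annotator can re-initialize it, as described in the abstract."""

    def __init__(self, tracker, frames):
        self.tracker = tracker
        self.frames = frames
        self.index = 0
        self.paused = False
        self.annotations = {}  # frame index -> bounding box

    def step(self):
        """Advance one frame; pause the session if the tracker fails."""
        if self.paused or self.index >= len(self.frames):
            return False
        ok, bbox = self.tracker.update(self.frames[self.index])
        if ok:
            self.annotations[self.index] = bbox
            self.index += 1
            return True
        self.paused = True  # tracker lost the object: wait for correction
        return False

    def reinitialize(self, bbox):
        """Correct the tracker at the current frame and resume tracking."""
        self.tracker.init(self.frames[self.index], bbox)
        self.annotations[self.index] = bbox
        self.index += 1
        self.paused = False
```

In this sketch, every accepted frame yields a bounding-box annotation, so the ground-truth record is produced as a side effect of supervised tracking rather than frame-by-frame manual labeling.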
Conference Presentation
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ekincan Ufuktepe, Vipul Ramtekkar, Ke Gao, Noor Al-Shakarji, Joshua Fraser, Hadi AliAkbarpour, Guna Seetharaman, and Kannappan Palaniappan "pyTAG: python-based interactive training data generation for visual tracking algorithms (Conference Presentation)", Proc. SPIE 11398, Geospatial Informatics X, 113980D (27 April 2020); https://doi.org/10.1117/12.2561718
KEYWORDS
Optical tracking
Detection and tracking algorithms
Computer vision technology
Machine learning
Machine vision
Motion models
Switches
