MangoGAN: a general adversarial network-based deep learning architecture for mango tree crown detection
Ramesh Kestur, Anjali Kulkarni, Rahul Bhaskar, Prajwal Sreenivasa, Dasari Dhanya Sri, Anubhaw Choudhary, Baluvaneralu V. Balaji Prabhu, Gautham Anand, Omkar Narasipura
Abstract

We present MangoGAN, a general adversarial network (GAN)-based deep learning semantic segmentation model for the detection of mango tree crowns in remotely sensed aerial images. The aerial images are acquired by low-altitude remote sensing carried out with a quadrotor unmanned aerial vehicle over a mango orchard, using a visible-spectrum optical sensor (producing RGB images) as the payload. MangoGAN is trained on 1430 image patches of size 240 × 240 pixels, and testing is carried out on 160 images. Results are analyzed using the precision, recall, and F1 metrics derived from the contingency matrix and by visualization using the Grad-CAM method. The performance of MangoGAN is compared with that of peer architectures trained on the same data; MangoGAN outperforms its peer architectures.
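As a minimal sketch (not the authors' code) of the evaluation described above, the snippet below computes pixel-wise precision, recall, and F1 from the contingency (confusion) matrix of a predicted binary crown mask against a ground-truth mask; the function name `segmentation_metrics` and the synthetic 240 × 240 masks are illustrative assumptions.

```python
import numpy as np


def segmentation_metrics(pred_mask: np.ndarray, true_mask: np.ndarray) -> dict:
    """Pixel-wise precision, recall, and F1 for binary segmentation masks.

    Both inputs are boolean/0-1 arrays of the same shape, e.g. 240 x 240 patches.
    """
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)

    tp = np.logical_and(pred, true).sum()    # crown pixels correctly detected
    fp = np.logical_and(pred, ~true).sum()   # background pixels flagged as crown
    fn = np.logical_and(~pred, true).sum()   # crown pixels missed

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}


# Example on a synthetic 240 x 240 patch pair (illustrative values only).
rng = np.random.default_rng(0)
pred = rng.random((240, 240)) > 0.5
true = rng.random((240, 240)) > 0.5
print(segmentation_metrics(pred, true))
```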

© 2022 Society of Photo-Optical Instrumentation Engineers (SPIE) 1931-3195/2022/$28.00 © 2022 SPIE
Ramesh Kestur, Anjali Kulkarni, Rahul Bhaskar, Prajwal Sreenivasa, Dasari Dhanya Sri, Anubhaw Choudhary, Baluvaneralu V. Balaji Prabhu, Gautham Anand, and Omkar Narasipura "MangoGAN: a general adversarial network-based deep learning architecture for mango tree crown detection," Journal of Applied Remote Sensing 16(1), 014527 (30 March 2022). https://doi.org/10.1117/1.JRS.16.014527
Received: 25 June 2021; Accepted: 4 March 2022; Published: 30 March 2022
KEYWORDS: Image segmentation, Network architectures, Convolution, RGB color model, Unmanned aerial vehicles, Performance modeling, Remote sensing