Automated and cost-effective phenotyping pipelines are needed to efficiently characterize new lines and hybrids developed in plant breeding programs. In this study, we employ deep neural networks (DNNs), specifically the PointNet architecture, to model individual maize plants from 3D point cloud data derived from unmanned aerial system (UAS) imagery. The experiment was conducted at the Indiana Corn and Soybean Innovation Center at the Agronomy Center for Research and Education (ACRE) in West Lafayette, Indiana, USA. On June 17th, 2020, a flight was carried out over maize trials using a custom-designed UAS platform equipped with a Sony Alpha ILCE-7R photogrammetric sensor. The RGB images were processed with a standard Structure from Motion (SfM) photogrammetric pipeline to reconstruct the study field as a scaled 3D point cloud. Fifty individual maize plants were manually segmented from the point cloud to train the DNN, and individual plants were subsequently extracted over a test trial with more than 5,000 plants. To reduce overfitting in the fully connected layers, we applied data augmentation not only in translation but also in color intensity. Results show a success rate of 72.4% for the extraction of individual plants. Our test trial demonstrates the potential of deep learning to address the challenge of extracting individual maize plants from UAS data.
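As a minimal sketch of the kind of point-cloud augmentation described above (random translation plus color-intensity jitter), the following Python snippet is illustrative only; the function name, shift range, and intensity range are assumptions, not parameters reported in the study.

```python
import numpy as np


def augment_point_cloud(points, colors, max_shift=0.10,
                        intensity_range=(0.8, 1.2), rng=None):
    """Hypothetical augmentation: translate XYZ coordinates by a small
    random offset and scale RGB intensity to mimic illumination changes.

    points : (N, 3) array of XYZ coordinates
    colors : (N, 3) array of RGB values in [0, 1]
    """
    rng = np.random.default_rng() if rng is None else rng

    # Shift the whole plant by a random offset in x, y, and z.
    shift = rng.uniform(-max_shift, max_shift, size=3)
    points_aug = points + shift

    # Uniformly scale color intensity, clipping back into [0, 1].
    scale = rng.uniform(*intensity_range)
    colors_aug = np.clip(colors * scale, 0.0, 1.0)

    return points_aug, colors_aug


if __name__ == "__main__":
    # Toy example: a random cloud of 1,024 colored points.
    pts = np.random.rand(1024, 3)
    cols = np.random.rand(1024, 3)
    pts_a, cols_a = augment_point_cloud(pts, cols)
    print(pts_a.shape, cols_a.shape)
```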
Precise and functional phenotyping is a limiting factor for crop genetic improvement. However, because of its ease of application, imagery-based phenomics represents the next breakthrough for improving the rate of genetic gain in field crops. Currently, crop breeders lack the know-how and computational tools to include such traits in breeding pipelines. Fully automatic, user-friendly data management, together with more powerful and accurate interpretation of results, should increase the use of field high-throughput phenotyping platforms (HTPPs) and thereby the efficiency of crop genetic improvement to meet the needs of future generations. The aim of this study is to develop a methodology for high-throughput phenotyping of soybean crops based on temporal multispectral imagery (MSI) collected from unmanned aerial systems (UAS). In this context, 'Triple S' (Statistical computing of Segmented Soybean multispectral imagery) was developed as an open-source software tool to statistically analyze the pixel values of soybean end-members and to compute canopy cover area and the number and length of soybean rows from georeferenced multispectral images. During the 2017 growing season, a soybean experiment was carried out at the Agronomy Center for Research and Education (ACRE) in West Lafayette, Indiana, USA. Periodic images were acquired with a Parrot Sequoia multispectral sensor on board a senseFly eBee. The results confirm the feasibility of the proposed methodology, which scales to a comprehensive analysis of crop extent and supports constant operational improvement and proactive management.
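As an illustration of the kind of canopy-cover computation described above, the sketch below thresholds NDVI from the red and near-infrared bands and converts the vegetation pixel count to area; the function name, NDVI threshold, and ground sampling distance are assumptions for demonstration, not values taken from 'Triple S'.

```python
import numpy as np


def canopy_cover_area(red, nir, gsd_m=0.10, ndvi_threshold=0.4):
    """Hypothetical canopy-cover estimate (m^2) from red and NIR bands.

    red, nir : 2D arrays of reflectance values (same shape)
    gsd_m    : ground sampling distance per pixel, in meters
    """
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)

    # NDVI = (NIR - Red) / (NIR + Red); guard against division by zero.
    ndvi = (nir - red) / np.maximum(nir + red, 1e-9)

    # Pixels above the threshold are treated as soybean canopy.
    canopy_mask = ndvi > ndvi_threshold
    pixel_area = gsd_m ** 2
    return canopy_mask.sum() * pixel_area, canopy_mask


if __name__ == "__main__":
    # Synthetic reflectance data for a quick check.
    rng = np.random.default_rng(0)
    red = rng.uniform(0.02, 0.2, size=(500, 500))
    nir = rng.uniform(0.1, 0.6, size=(500, 500))
    area, mask = canopy_cover_area(red, nir)
    print(f"Estimated canopy cover: {area:.1f} m^2")
```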