Detection of wintertime green vegetated cover using object-based classification with open-source remote sensing and geospatial technologies
Jae Sung Kim, Sara P. Syswerda, Sigrid D. P. Smith
Abstract

It is important to detect wintertime green vegetated cover (WGVC) because it includes cover crops, one of the most important agricultural best management practices in use today. Cover crop area, which is part of WGVC, has traditionally been estimated using survey methods, but remote sensing offers a more time- and cost-effective assessment. Previously developed pixel-based methods for assessing cover crops with remote sensing can suffer from a salt-and-pepper effect, which lowers classification accuracy. Therefore, object-based classification was applied to estimate the spatial distribution of WGVC across the entire state of Delaware. To reduce the financial burden of fee-based software products, the workflow was built entirely with open-source remote sensing software and publicly available imagery. WGVC in this study was defined as any vegetation planted or surviving during winter on field crop areas. As expected, the WGVC area estimated in this study was far more extensive than the surveyed area of cover crops under the narrower conventional definition. Applying this methodology across Delaware, the total WGVC between 12/26/2021 and 04/30/2022 was estimated to be 137,297 ha. The classification accuracy of each date was evaluated using samples collected from pan-sharpened Landsat 8 and 9 images; the accuracies were higher than 85%, and Kappa statistics were above 74% in all cases. The workflow in this study may improve time, labor, and cost efficiency in other areas.

1. Introduction

Cover crops are an essential agricultural best management practice benefiting growers. Cover crops are becoming more widely used by farmers, improving the sustainability of agriculture in the US.1 Cover crops can contribute to the reduction of soil erosion, phosphorus loss, and groundwater nitrate leachate risk during winter.2–5 Also, cover crops can protect perennial crop seedlings during establishment,6,7 add organic matter, and enhance soil aggregation.7 In addition, leguminous cover crops can transform atmospheric nitrogen into biomass, which mineralizes in soil for the following grain crop to use.7 However, understanding the adoption and impacts of cover crops is hindered by the difficulty of accurately characterizing their spatial distribution.

According to Ref. 8, cover crops are generally defined as grasses, legumes, and forbs planted for seasonal cover and associated benefits. These crops are meant to cover and enrich the soil instead of being harvested. The most common types of cover crops are rye and winter wheat.8 For example, cereal rye is often planted as a winter cover crop in the fall between cash crops (e.g., corn and soybeans).8 However, we focus on wintertime green vegetated cover (WGVC), the overall vegetation cover on croplands during wintertime. We use this broader category because such cover functions similarly to cover crops regardless of the purpose or precise timing of planting, even though some WGVC is not conventionally defined as cover crop. Any vegetation during wintertime is expected to provide the ecological functions of cover crops to some degree (such as reducing soil erosion), and perennial crops have been included among cover crop types in some past studies (e.g., Refs. 9–11). Consequently, WGVC identified in this study may include perennial crops (e.g., alfalfa), field crops planted in fall or early spring (before May), and green weeds in addition to conventional cover crops. Therefore, we expected the WGVC area to be larger than the surveyed area of cover crops under the traditional definition.

Currently, the usage of cover crops is examined by windshield surveys5,12 or questionnaire surveys, which are incomplete, often biased, and require significant time, labor, and cost.13 Remote sensing technologies are therefore promising approaches for the assessment of cover crops, especially on large-scale cropland, as noted in Ref. 13. A pixel-based approach has been applied in many cases (e.g., Refs. 5 and 13–15) to estimate cover crop area in agricultural remote sensing studies. Pixel-based approaches were used with agricultural and vegetation indices, such as a combination of normalized difference vegetation index (NDVI), normalized difference tillage index (NDTI), and normalized difference residue index (NDRI),5 or a combination of NDVI, difference vegetation index (DVI), normalized green red difference index (NGRDI), and ratio vegetation index (RVI).13 However, the pixel-based approach is known to produce a salt-and-pepper effect,16–19 which can cause problems when converting classified pixels to polygons. In Ref. 13, the salt-and-pepper effect was observed in pixel-based classification for the assessment of cover crops, and the same effect can be expected when assessing WGVC. Therefore, a remote sensing approach other than the pixel-based method was sought, and object-based classification was applied in this study to detect WGVC from satellite imagery while avoiding the salt-and-pepper effect.

In many cases, object-based classification is performed with commercial fee-based software. However, such a workflow financially burdens many environmental, agricultural, or educational organizations because of the high cost of software licenses. Therefore, the large-scale WGVC detection workflow proposed in this study was implemented using open-source remote sensing and geographic information system (GIS) technologies, which is feasible because of the growing maturity of open-source geospatial tools. Related to this study, an unmanned aerial vehicle-based object-based image analysis (OBIA) of cover crop detection in vineyards20 was recently conducted using eCognition Developer 9.2 (Ref. 21). However, a workflow for large-scale (e.g., state-level) WGVC identification by supervised object-based classification using open-source technologies was not found in the literature.

For large-scale (e.g., state-level) cropland WGVC detection, it is ideal to use medium resolution imagery. Processing a high-resolution image for a large area will require large computing resources, and low-resolution images will give inaccurate results for WGVC delineation. Sentinel-2 constellation images have 10 m resolution with 5-day temporal resolution. Also, these images are free of charge to public, scientific, and commercial users.22 Therefore, Sentinel-2 images were used in this study because they are mid-resolution satellite images with near real-time accessibility.

The purpose of this study is to develop a workflow for delineating large-scale WGVC with OBIA and open-source geospatial technologies, using Delaware, United States, as our study area. Delaware is a good location because it allows us to process and analyze data for an entire small state with significant agricultural area. Hence, this study fills the absence of OBIA studies for large-scale identification of WGVC using open-source geospatial technologies.

2. Materials and Methods

The selected area of interest is Delaware, United States. For this area of interest, Copernicus Sentinel-223,24 images (specifications in Table 1) were acquired from US Geological Survey (USGS) EarthExplorer.25 At the time of writing, Sentinel-2 data are accessible in the Copernicus Browser24 but no longer in USGS EarthExplorer. The revisit time of 5 days was sufficient to find cloudless images at reasonable temporal intervals.

Table 1

Description of Sentinel-2 constellation.23,24

Attribute | Description
Number of satellites | 2
Orbit altitude | 786 km in a Sun-synchronous orbit
Orbital swath width | 290 km
Orbit inclination | 98.62 deg
Sensor type | Multispectral instrument (MSI), pushbroom
Number of spectral bands | 13
GSD | 10 m, 20 m, 60 m
Visible and NIR band GSD | 10 m
Revisit time | 10 days (one satellite), 5 days (two satellites)
Geographical coverage | 56° south – 83° north

Because WGVC can change during a season, multiple image dates were needed to represent the entire wintertime. The selected images were from December of the harvest year and from February and April of the following year. Because three tiles are required to cover Delaware, virtual raster mosaics were built with the GDAL utility gdalbuildvrt26 from the tiles in Table 2 and then clipped to the state of Delaware in “.tif” format.
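A minimal sketch of this mosaicking and clipping step using the GDAL Python bindings is shown below; the tile and boundary file names are hypothetical placeholders, not the files used in this study.

```python
from osgeo import gdal

# Build a virtual mosaic of the three Sentinel-2 tiles (hypothetical file
# names), then clip it to a Delaware state boundary polygon as a GeoTIFF.
vrt = gdal.BuildVRT("mosaic.vrt", ["T18TVK.tif", "T18SVJ.tif", "T18SVH.tif"])
vrt = None  # close the dataset so the VRT is flushed to disk

gdal.Warp(
    "delaware_clip.tif",
    "mosaic.vrt",
    cutlineDSName="delaware_boundary.shp",  # assumed state boundary vector
    cropToCutline=True,
)
```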

Table 2

Description of image data.

Attribute | Description
Dates | 12/26/2021, 2/9/2022, 4/30/2022
Tile index | 18TVK, 18SVJ, 18SVH
Bands used | B8 (NIR, λ = 842 nm), B4 (red, λ = 665 nm), B3 (green, λ = 560 nm)
GSD | 10 m

These false color (NIR, R, and G bands) images (Fig. 1, displayed with a cumulative count cut with a minimum of 2% and maximum of 98%) were segmented using large-scale mean-shift (LSMS)27 segmentation with Orfeo ToolBox (OTB).28,29 The mean shift algorithm30 has the advantage of a hierarchical relationship between segmentation levels, unlike the scale invariance of the watershed algorithm.31 Also, it does not require prior knowledge about cluster numbers or shape constraints.32 In addition, more complex shapes can be modeled using the mean shift algorithm compared with K-means.33 LSMS was chosen as the segmentation algorithm because the image data in this study had 10 m GSD and covered the entire state of Delaware, which was too large to be segmented with the traditional mean shift option in OTB. Table 3 shows the parameters used for LSMS in this study.

Fig. 1

Sentinel-2 false color images (NIR, R, and G) of 12/26/2021, 2/9/2022, and 4/30/2022 (from left to right), shown using cumulative count cut with a minimum (2%) and maximum (98%).


Table 3

Parameters for large scale mean shift segmentation.

Parameter | Value (px)
Spatial radius (spatialr) | 5
Range radius (ranger) | 15
Minimum size (minsize) | 400
X tile size (tilesizex) | 1000
Y tile size (tilesizey) | 1000

The spatial and range radii in Table 3 are the thresholds on the spatial distance and the spectral signature Euclidean distance used to evaluate whether pixels belong to an object.34 The minimum size for a segmented object was set to 400 pixels, which is equivalent to 4 ha (9.88 acres). This value was determined after visual examination of fields in the Sentinel-2 images showed that most individual fields were larger than 400 pixels. Also, any isolated crop field smaller than 400 pixels would be recovered by a later refinement: extraction of the cropland area using the 2021 National Agricultural Statistics Service (NASS) cropland data layer (CDL)35 boundary.
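The segmentation step can be scripted with the OTB Python bindings. The following is a minimal sketch assuming the clipped composite is named delaware_clip.tif; parameter names follow the OTB LargeScaleMeanShift application and may vary slightly across OTB versions.

```python
import otbApplication as otb

# Large-scale mean-shift segmentation with the parameters of Table 3.
app = otb.Registry.CreateApplication("LargeScaleMeanShift")
app.SetParameterString("in", "delaware_clip.tif")
app.SetParameterInt("spatialr", 5)       # spatial radius
app.SetParameterFloat("ranger", 15.0)    # range (spectral) radius
app.SetParameterInt("minsize", 400)      # minimum object size (px)
app.SetParameterInt("tilesizex", 1000)   # tile sizes for tile-wise processing
app.SetParameterInt("tilesizey", 1000)
app.SetParameterString("mode.vector.out", "segments.shp")
app.ExecuteAndWriteOutput()
```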

To restrict the area of interest to field crop areas, only the segments on field crops were extracted by clipping the segmented objects with the CDL cropland areas. Beyond a small area (e.g., farm level), it is impractical to identify field crops in person for an entire state. Therefore, the 2021 NASS CDL for Delaware was used to identify the field crop area, similar to its use for cover crop identification by Hively et al.5,36 Because the CDL includes land covers other than crops, only field crop pixels were extracted using gdal_calc.py. Table 4 shows the field crops and their areas calculated using the QGIS37 raster layer unique values report tool. The crop name for each pixel value can be found in the Delaware CDL metadata.38 The total area of field crops was calculated as 183,754 ha (454,057 acres). The most significant field crops were corn and soybeans, which together represented about 81% of the entire field crop area.

Table 4

Candidate summer crop area from 2021 Delaware CDL for WGVC. (Abbreviations: Dbl = double, WinWht = winter wheat.)

Crops | Area (ha) | Crops | Area (ha) | Crops | Area (ha)
Corn | 84,327.12 | Potatoes | 372.96 | Cantaloupes | 33.03
Sorghum | 813.06 | Other crops | 13.32 | Peppers | 106.20
Soybeans | 63,639.72 | Sweet potatoes | 46.71 | Greens | 13.05
Sunflower | 1.35 | Misc vegs and fruits | 1.44 | Strawberries | 1.98
Sweet corn | 3,219.21 | Watermelons | 1,266.30 | Squash | 27.00
Barley | 106.20 | Onions | 0.18 | Dbl crop WinWht/corn | 97.47
Winter wheat | 585.63 | Cucumbers | 82.44 | Dbl crop triticale/corn | 19.80
Dbl crop WinWht/soybeans | 12,578.22 | Peas | 87.75 | Pumpkins | 221.49
Rye | 165.96 | Tomatoes | 21.33 | Dbl crop WinWht/sorghum | 10.35
Oats | 17.19 | Herbs | 0.45 | Dbl crop barley/corn | 54.63
Millet | 3.33 | Clover/wildflowers | 5.40 | Dbl crop soybeans/oats | 1.62
Alfalfa | 603.99 | Sod/grass seed | 2,310.12 | Dbl crop corn/soybeans | 0.18
Other hay/non-alfalfa | 8,114.67 | Switchgrass | 0.18 | Cabbage | 151.20
Buckwheat | 0.18 | Triticale | 0.90 | Eggplants | 1.62
Sugar beets | 0.36 | Carrots | 0.27 | Gourds | 0.18
Dry beans | 1,720.53 | Asparagus | 1.17 | Dbl crop barley/soybeans | 2,903.49
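A scripted sketch of the field crop extraction step described above is shown below using the GDAL Python bindings instead of the gdal_calc.py command line. The class codes listed are an illustrative subset; the full set of field crop codes comes from the Delaware CDL metadata.38

```python
from osgeo import gdal
import numpy as np

# Keep only field crop pixels in the 2021 CDL raster; all other classes
# become zero. The codes below are an illustrative subset (corn, sorghum,
# soybeans, winter wheat, alfalfa); the full list is in the CDL metadata.
FIELD_CROP_CODES = [1, 4, 5, 24, 36]

src = gdal.Open("2021_delaware_cdl.tif")  # hypothetical file name
arr = src.GetRasterBand(1).ReadAsArray()
field_crops = np.where(np.isin(arr, FIELD_CROP_CODES), arr, 0)

out = gdal.GetDriverByName("GTiff").CreateCopy("cdl_field_crops.tif", src)
out.GetRasterBand(1).WriteArray(field_crops)
out = None  # flush to disk
```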

However, clipping by the CDL modified the segmentation object polygons and even created multipart polygons in some locations by clipping out central areas of polygons. Therefore, the attributes of each object had to be recalculated to reflect these changes. First, possible multipart polygons were separated using the Multipart to Singleparts tool in QGIS, and the objects’ attributes (mean and variance) were calculated with the Zonal Statistics tool in QGIS for each band. In Ref. 39, texture was measured in terms of homogeneity, contrast, dissimilarity, entropy, standard deviation, correlation, angular second moment, and mean. Therefore, using the mean and variance captures not only the spectral characteristics of objects but also, to some degree, their texture.
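As a scripted alternative to the QGIS Zonal Statistics tool, per-object means and variances can be computed with the rasterstats package; this sketch assumes rasterstats is installed and uses hypothetical file names (the study itself used the QGIS tool).

```python
from rasterstats import zonal_stats

# Per-object mean and standard deviation for one band of the composite;
# variance is the squared standard deviation. Repeat for each band.
stats = zonal_stats("segments_singlepart.shp", "delaware_clip.tif",
                    band=1, stats=["mean", "std"])
for s in stats[:3]:
    print(s["mean"], s["std"] ** 2)  # mean and variance of the first objects
```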

After training on the collected samples, the field crop segments were classified using normal Bayes (NB)40 supervised classification in OTB. In Ref. 41, NB and the support vector machine (SVM) showed better performance than the classification and regression tree (CART) and K nearest neighbor (KNN). Although NB needs a higher number of samples, SVM requires complex tuning parameters; therefore, NB had a clear advantage in this study over SVM, CART, and KNN. For sampling purposes, both false and true color images, displayed with a cumulative count cut with a minimum (2%) and maximum (98%), were used as ground truth. Using best judgment, the operator could identify most vegetation cover on the field crop area by its red and green color tones in the false and true color images, respectively, for training. Figure 2 shows examples of the sampling of WGVC and non-WGVC on the clipped segments for training. The sampling points for each class were marked inside each sample object by the operator. The properties of each object were attached to each sample point using the Join Attributes by Location tool with the intersects geometric predicate in QGIS.
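A minimal sketch of the training and classification steps with the OTB Python bindings follows. The field names ("mean_b1", "var_b1", "class", "predicted") and file names are hypothetical, and parameter names follow the OTB TrainVectorClassifier and VectorClassifier applications, which may differ across OTB versions.

```python
import otbApplication as otb

# Per-object features: band means and variances computed earlier.
features = ["mean_b1", "mean_b2", "mean_b3", "var_b1", "var_b2", "var_b3"]

# Train a normal Bayes model on the labeled sample objects.
train = otb.Registry.CreateApplication("TrainVectorClassifier")
train.SetParameterStringList("io.vd", ["training_samples.shp"])
train.SetParameterStringList("feat", features)
train.SetParameterStringList("cfield", ["class"])  # WGVC / non-WGVC label
train.SetParameterString("classifier", "bayes")
train.SetParameterString("io.out", "nb_model.txt")
train.ExecuteAndWriteOutput()

# Apply the trained model to all field crop segments.
classify = otb.Registry.CreateApplication("VectorClassifier")
classify.SetParameterString("in", "segments_singlepart.shp")
classify.SetParameterString("model", "nb_model.txt")
classify.SetParameterStringList("feat", features)
classify.SetParameterString("cfield", "predicted")
classify.SetParameterString("out", "classified_segments.shp")
classify.ExecuteAndWriteOutput()
```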

Fig. 2

Sampling example of WGVC (green dot) and non-WGVC (red dot).


It has been noted that, in practice, the number of training samples should be between 10 and 100 times the number of bands.42 The number of samples that produced satisfactory classification results was larger than 30 for each class in every case (Table 5). This operation was implemented for the false color image of each of the three dates, and the results were combined by the Union operation in QGIS to represent the WGVC between winter 2021 and spring 2022.

Table 5

The number of training samples required for classification results to become satisfactory for each image date.

Date of image | 12/26/2021 | 2/9/2022 | 4/30/2022
Cover crop | 42 | 50 | 30
Non-cover crop | 47 | 64 | 30

The average NDVI values of WGVC and non-WGVC objects were compared for each date. After creating an NDVI raster, the mean NDVI of each sample object was calculated by zonal statistics in QGIS.37 In Fig. 3, WGVC samples had higher NDVI values than non-WGVC samples. However, the minimum NDVI of WGVC and the maximum NDVI of non-WGVC overlapped to some degree for 12/26/21 and 02/09/22. The NB classifier is expected to handle this confusion better than alternative methods relying on NDVI thresholds because it computes the probability of membership in each class.
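For reference, the NDVI raster can be computed from the composite with the GDAL Python bindings; this sketch assumes band 1 is NIR (B8) and band 2 is red (B4), following the band order of Table 2, and a hypothetical input file name.

```python
from osgeo import gdal
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero.
ds = gdal.Open("delaware_clip.tif")
nir = ds.GetRasterBand(1).ReadAsArray().astype(np.float32)
red = ds.GetRasterBand(2).ReadAsArray().astype(np.float32)
ndvi = np.where(nir + red > 0, (nir - red) / (nir + red), 0.0)

out = gdal.GetDriverByName("GTiff").Create(
    "ndvi.tif", ds.RasterXSize, ds.RasterYSize, 1, gdal.GDT_Float32)
out.SetGeoTransform(ds.GetGeoTransform())
out.SetProjection(ds.GetProjection())
out.GetRasterBand(1).WriteArray(ndvi)
out = None  # flush to disk
```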

Fig. 3

Boxplot of NDVI of WGVC and non-WGVC samples from (a) 12/26/21, (b) 02/09/22, and (c) 04/30/22. Boxes show interquartile range with solid lines for medians and dotted lines for means, and whiskers show minima and maxima (or upper fences when a point represents a maximum).


Figure 4 shows the main workflow described above.

Fig. 4

WGVC area estimation workflow.


For the accuracy assessment, a dataset other than the Sentinel-2 imagery was sought. The criteria were that (1) the data should be publicly available geospatial data and (2) the data should show land cover on dates near those of the Sentinel-2 images. Landsat 8 and 9 images satisfied these criteria: they are publicly available georeferenced images in USGS EarthExplorer, and Landsat’s temporal resolution is frequent enough to provide capture dates close to those of the Sentinel-2 images. Using Landsat 8 and 9 images, false color images were composited from the NIR, R, and G bands, and true color images from the R, G, and B bands, as shown in Table 6. Both false and true color images were used to assess the accuracy of the classification. However, the GSD of the NIR, R, G, and B bands of Landsat 8 and 9 is rather coarse (30 m). Therefore, pan-sharpened images (GSD = 15 m) were created for both false and true color composites using the panchromatic band (GSD = 15 m) with the Pansharpening tool43 in the GDAL plugin of QGIS. Figure 5 shows the images used for the accuracy assessment. The time difference between the Landsat and Sentinel data was 10 days at most; this gap was allowed because some Landsat 8 and 9 imagery taken near the Sentinel-2 acquisition dates had severe cloud cover. Fifty samples with an area of at least 10,000 m² were collected randomly from each class of the classification results using the QGIS random extract within subsets tool for the 12/26/2021 and 2/9/2022 results. The operator compared the classification of those objects with the corresponding pan-sharpened false and true color Landsat images.
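The pan-sharpening step can also be run outside QGIS with the gdal_pansharpen.py utility43 that underlies the QGIS tool; the following is a sketch with hypothetical file names.

```python
import subprocess

# Pan-sharpen a 30 m Landsat composite with the 15 m panchromatic band.
# File names are placeholders; gdal_pansharpen.py ships with GDAL.
subprocess.run([
    "gdal_pansharpen.py",
    "landsat_pan_b8.tif",        # 15 m panchromatic band
    "landsat_nir_r_g.tif",       # 30 m false color composite (bands 5, 4, 3)
    "landsat_pansharpened.tif",  # 15 m pan-sharpened output
], check=True)
```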

Table 6

Landsat data used for accuracy assessment.

Image | Platform | Bands used (GSD) | Acquisition date | Compared Sentinel-2 date
1 | Landsat 8 | NIR: band 5 (30 m), R: band 4 (30 m), G: band 3 (30 m), B: band 2 (30 m), PAN: band 8 (15 m) | 12/16/2021 | 12/26/2021
2 | Landsat 9 | same as image 1 | 2/10/2022 | 2/9/2022
3 | Landsat 8 | same as image 1 | 5/9/2022 | 4/30/2022

Fig. 5

(a) False color Landsat 8 (12/16/21), (b) false color Landsat 9 (2/10/22), (c) false color Landsat 8 (5/9/22), (d) true color Landsat 8 (12/16/21), (e) true color Landsat 9 (2/10/22), and (f) true color Landsat 8 (5/9/22).


However, the 5/9/2022 Landsat 8 image still had thick cloud cover over the southeastern part of Delaware. Therefore, 80 initial sample polygons larger than 10,000 m² were randomly chosen from the 4/30/2022 classification results table, and the first 50 samples in row order that were not affected by cloud cover were used for the accuracy assessment against the pan-sharpened false and true color Landsat images of 5/9/2022. The area threshold (10,000 m²) was imposed to include only larger polygons in the accuracy assessment. If a sample object contained both WGVC and non-WGVC pixels, the object was assigned the class of the majority of its pixels. Because the samples for the accuracy assessment were segmented objects, the classification accuracy was calculated considering the area of each sample object using Eq. (1), following Ref. 44 (as cited in Ref. 45):

Eq. (1)

$$\hat{\pi} = \frac{\sum_{i=1}^{n} c_i s_i}{\sum_{i=1}^{n} s_i},$$

where \(\hat{\pi}\) is the overall accuracy, \(c_i\) is 1 for a correct classification and 0 for an incorrect classification, \(n\) is the number of validation units, and \(s_i\) is the area of sample unit \(i\).
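As a worked illustration of Eq. (1), with hypothetical validation units:

```python
# Area-weighted overall accuracy of Eq. (1); c_i and s_i are hypothetical.
c = [1, 1, 0, 1]          # 1 = correctly classified object, 0 = incorrect
s = [5.2, 3.1, 1.8, 4.4]  # object areas (ha)
pi_hat = sum(ci * si for ci, si in zip(c, s)) / sum(s)
print(f"Overall accuracy: {pi_hat:.3f}")  # 12.7 / 14.5 = 0.876
```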

Also, the Kappa statistic was calculated with the area of objects instead of the number of pixels in the following equation:42

Eq. (2)

$$\hat{K} = \frac{N \sum_{i=1}^{r} x_{ii} - \sum_{i=1}^{r} (x_{i+} \cdot x_{+i})}{N^2 - \sum_{i=1}^{r} (x_{i+} \cdot x_{+i})},$$

where \(r\) is the number of rows in the error matrix, \(x_{ii}\) is the area in row \(i\) and column \(i\), \(x_{i+}\) is the total area in row \(i\) (shown as the marginal total to the right of the matrix), \(x_{+i}\) is the total area in column \(i\) (shown as the marginal total at the bottom of the matrix), and \(N\) is the total area included in the matrix.
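A short sketch of Eq. (2) applied to the Image 1 error matrix of Table 8 (areas in ha) reproduces the reported KHAT:

```python
import numpy as np

# Area-based KHAT of Eq. (2) for the Image 1 matrix of Table 8
# (rows = classification, columns = reference).
x = np.array([[220.00, 31.09],
              [31.00, 220.40]])
N = x.sum()
diag = np.trace(x)
chance = (x.sum(axis=1) * x.sum(axis=0)).sum()  # sum of x_{i+} * x_{+i}
khat = (N * diag - chance) / (N**2 - chance)
print(f"KHAT = {khat:.4f}")  # ~0.7529, matching Table 8
```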

The strength of agreement indicated by the Kappa statistic was evaluated as suggested in Ref. 46: values of 0.41 to 0.60 are considered moderate, 0.61 to 0.80 substantial, and 0.81 to 1.00 almost perfect agreement. Finally, the WGVC area for each date, as well as the area of the polygons merged by Union (a QGIS vector processing tool), was calculated.

3. Results and Discussion

OTB reported the sample training results in the form of confusion matrices (Table 7) for WGVC and non-WGVC. The training accuracy was high for every image.

Table 7

Confusion matrix for training sampling (column is reference label, row is produced label).

12/26/2021
 | WGVC | Non-WGVC | Accuracy (%)
WGVC | 42 | 2 | 95.5
Non-WGVC | 0 | 45 | 100
Accuracy (%) | 100 | 95.7 | 97.8

2/9/2022
 | WGVC | Non-WGVC | Accuracy (%)
WGVC | 50 | 3 | 94.3
Non-WGVC | 0 | 61 | 100
Accuracy (%) | 100 | 95.3 | 97.4

4/30/2022
 | WGVC | Non-WGVC | Accuracy (%)
WGVC | 30 | 0 | 100
Non-WGVC | 0 | 30 | 100
Accuracy (%) | 100 | 100 | 100

WGVC and non-WGVC areas of Delaware were classified as shown in Figs. 6(a)–6(c) for 12/26/2021, 02/09/2022, and 04/30/2022 using the NB classifier with trained models for the extracted field crop objects. Visual inspection suggested that WGVC and non-WGVC were usually classified properly (see the example zoomed area west of Dover, Delaware, United States, in Fig. 7). In Fig. 6, there was no substantial WGVC in the northern part of Delaware because this part is mostly urban (e.g., the cities of Newark and Wilmington). Also, the Delaware Bay area (mostly wetlands), Dover (urban), and Redden State Forest (forest) did not have a substantial presence of crops. The polygons merged by Union for the above three dates showed WGVC spread across most of the state [Fig. 6(d)]. The accuracy of the classification was evaluated against pan-sharpened Landsat 8 and 9 false/true color images by visual inspection, as shown in the confusion matrices (Table 8).

Fig. 6

WGVC classification: (a) classification - 12/26/2021, (b) classification - 02/09/2022, (c) classification - 04/30/2022, and (d) WGVC merged by Union.


Fig. 7

False color image (a) and classification (b) for an example location west of Dover, Delaware, United States.


Table 8

Confusion matrices and Kappa statistics of the three classification results.

(Reference data in columns; classification data in rows.)

Image 1 – 12/26/2021
 | WGVC (ha) | Non-WGVC (ha) | Row total (ha) | User’s accuracy (%)
WGVC (ha) | 220.00 | 31.09 | 251.09 | 87.62
Non-WGVC (ha) | 31.00 | 220.40 | 251.39 | 87.67
Column total (ha) | 251.00 | 251.49 | 502.49 |
Producer’s accuracy (%) | 87.65 | 87.64 | Overall accuracy: 87.64% | KHAT: 75.29%

Image 2 – 02/09/2022
 | WGVC (ha) | Non-WGVC (ha) | Row total (ha) | User’s accuracy (%)
WGVC (ha) | 272.10 | 15.84 | 287.94 | 94.50
Non-WGVC (ha) | 21.68 | 232.70 | 254.37 | 91.48
Column total (ha) | 293.78 | 248.54 | 542.31 |
Producer’s accuracy (%) | 92.62 | 93.63 | Overall accuracy: 93.08% | KHAT: 86.09%

Image 3 – 04/30/2022
 | WGVC (ha) | Non-WGVC (ha) | Row total (ha) | User’s accuracy (%)
WGVC (ha) | 263.11 | 36.21 | 299.32 | 87.90
Non-WGVC (ha) | 36.75 | 244.55 | 281.29 | 86.94
Column total (ha) | 299.85 | 280.76 | 580.61 |
Producer’s accuracy (%) | 87.75 | 87.10 | Overall accuracy: 87.43% | KHAT: 74.84%

In the confusion matrices, the user’s, producer’s, and overall accuracies were higher than 85% for all dates (Table 8); therefore, the classification results were satisfactory. Also, the kappa statistic (KHAT), which measures the difference between actual agreement and chance agreement,42 was higher than 74% in all cases, indicating substantial to almost perfect agreement. The total WGVC area was found to be about 137,297 ha (Table 9).

Table 9

The total area of WGVC class union.

Date | Area (ha) | Area (acres)
12/26/2021 | 95,327 | 235,552
02/09/2022 | 79,098 | 195,452
04/30/2022 | 90,620 | 223,922
Total (union) | 137,297 | 339,262

It was found that the multi-date WGVC identification strategy was useful because the total area estimated by Union was higher than that found using any individual date. The total area of field crops from the 2021 Delaware CDL (Table 4) was 183,754 ha; therefore, about three quarters (75%) of the field crop area was covered by WGVC between 12/26/2021 and 4/30/2022. Since WGVC includes conventionally defined cover crops, it is meaningful to compare the WGVC area of the 2021 to 2022 winter with the cover crop area most recently surveyed in 2022. In the Nonpoint Source Program (NPSP) 2022 Annual Report for Delaware,12 the cover crop area was 40,811 ha (originally 100,846 acres), or 22% of field crops. Compared with the total WGVC found (137,297 ha), the difference is 96,486 ha. The first reason for the difference is that WGVC includes additional types of vegetation that are not defined as conventional cover crops. The second reason is that the surveyed area of traditionally defined cover crops may itself differ from the true value, since survey methods can be incomplete and biased.13

It should also be noted that the use of cover crops has been increasing. Cover crop usage increased by 50% between 2012 and 2017 in the United States8 and fourfold between 2011 and 2021 in the Midwestern United States.47 The 2016 to 2017 report on the fifth annual cover crop survey by Sustainable Agriculture Research and Education and the Conservation Technology Information Center, whose respondents represented 47 states, showed increases in cover crop usage of about 25% from 2014 to 2015 and about 60% from 2014 to 2016.48 Also, the 2022 to 2023 report, whose respondents represented 49 states, showed that the mean cover crop acreage among respondents who used cover crops increased from 324.9 to 413.6 acres between 2018 and 2022.49 This upward trend was also seen when comparing the cover crop area surveyed in the NPSP 2022 Annual Report for Delaware12 with the 2017 Census of Agriculture (COA)50 for Delaware: the cover crop area in the 2022 NPSP annual report (40,811 ha) was 5,153 ha larger than the cropland planted to a cover crop (excluding Conservation Reserve Program land) in the 2017 COA (35,658 ha, originally 88,112 acres). Because wintertime vegetation includes more than planted cover crops, the WGVC estimated by the proposed method may be more useful for some agricultural and environmental modeling applications.

Using OTB for supervised OBIA is challenging because of the lack of user-friendly documentation for sampling, training, classification, and accuracy assessment, as is often the case for free, open-source technologies. However, an object-based classification tutorial51 was informative in forming the supervised classification workflow with OTB in this study. The current study is expected to contribute to these resources because it documents supervised object-based classification with concrete examples; it is therefore applicable to workflow development for other uses of OBIA.

4. Summary and Conclusion

To identify WGVC areas in Delaware, object-based classification was applied using open-source geospatial technologies. The application of a remote sensing technique (object-based image classification) with LSMS segmentation enabled large-scale WGVC detection that is efficient in terms of cost, time, and labor. Also, the salt-and-pepper effect was removed by applying an object-based classification approach instead of the traditional pixel-based methodology. Another hurdle in object-based classification, the high cost of fee-based commercial software, was overcome using OTB, GDAL, and QGIS, which are free and open-source geospatial technologies. To train the NB classifier, samples were chosen for the WGVC and non-WGVC classes. For the accuracy assessment, classification results were compared with pan-sharpened Landsat 8 and 9 images by the operator. The final NB classification results evaluated against the pan-sharpened Landsat 8 and 9 false/true color images were quite satisfactory, with confusion matrices showing overall accuracies higher than 85% and KHATs higher than 74% in all cases. The input data were created using the NIR, R, and G bands of Sentinel-2 images, which are free and publicly available with a GSD (10 m) well suited to state-level analysis and a temporal resolution (5 days) high enough to acquire cloudless images at reasonable intervals. The field crop areas from the NASS CDL were used as candidate areas of WGVC. The total WGVC area was created using the Union tool in QGIS, merging the WGVC polygons of the three dates (12/26/2021, 02/09/2022, and 04/30/2022), and was estimated as 137,297 ha overall.

The presented study offers large-scale (state-level) WGVC detection with supervised object-based classification using completely open-source technologies. The presented workflow in this study will be beneficial to future WGVC studies and can be used by organizations with limited time, labor, and funding.

Code and Data Availability

All of the data and software used in this study are publicly available at the sources cited within.

Acknowledgments

This work was funded by the US Department of Agriculture (Grant No. 2020-38821-31105) with additional support from Delaware State University, Michigan Technological University, and Pierce Cedar Creek Institute. We thank H. Tripp for additional technical assistance and anonymous reviewers for helpful feedback on an earlier version of the paper. The authors have no conflicts of interest to declare.

References

1. A. T. Rosa et al., “Contributions of individual cover crop species to rainfed maize production in semi-arid cropping systems,” Field Crops Res., 271, 108245, https://doi.org/10.1016/j.fcr.2021.108245 FCREDZ 0378-4290 (2021).

2. S. De Baets et al., “Cover crops and their erosion-reducing effects during concentrated flow erosion,” Catena, 85 (3), 237–244, https://doi.org/10.1016/j.catena.2011.01.009 CIJPD3 0341-8162 (2011).

3. K. W. Staver and R. B. Brinsfield, “Using cereal grain winter cover crops to reduce groundwater nitrate contamination in the mid-Atlantic coastal plain,” J. Soil Water Conserv., 53 (3), 230–240 JSWCA3 0022-4561 (1998).

4. J. W. Singer et al., “Cover crop effects on nitrogen load in tile drainage from Walnut Creek Iowa using root zone water quality (RZWQ) model,” Agric. Water Manage., 98 (10), 1622–1628, https://doi.org/10.1016/j.agwat.2011.05.015 AWMADF 0378-3774 (2011).

5. W. D. Hively et al., “Remote sensing to monitor cover crop adoption in southeastern Pennsylvania,” J. Soil Water Conserv., 70 (6), 340–352, https://doi.org/10.2489/jswc.70.6.340 JSWCA3 0022-4561 (2015).

6. J. F. Power, Cover Crops, 124–126, McGraw-Hill, New York (1996).

7. P. W. Unger and M. F. Vigil, “Cover crop effects on soil water relationships,” J. Soil Water Conserv., 53 (3), 200–207 JSWCA3 0022-4561 (1998).

8. S. Wallander et al., “Cover crop trends, programs, and practices in the United States,” Econ. Inf. Bull., 222 (2021).

9. D. L. Wright, C. Mackowiak and A. Blount, Cover Crops, University of Florida, IFAS Extension Service, Gainesville, Florida (2017).

10. C. Banik et al., “Perennial cover crop influences on soil C and N and maize productivity,” Nutr. Cycling Agroecosyst., 116, 135–150, https://doi.org/10.1007/s10705-019-10030-3 (2020).

11. C. A. Bartel et al., “Establishment of perennial groundcovers for maize-based bioenergy production systems,” Agron. J., 109 (3), 822–835, https://doi.org/10.2134/agronj2016.11.0656 AGJOAT 0002-1962 (2017).

13. K. C. Kushal et al., “Assessment of the spatial and temporal patterns of cover crops using remote sensing,” Remote Sens., 13 (14), 2689, https://doi.org/10.3390/rs13142689 RSEND3 (2021).

14. E. R. Hunt et al., “NIR-green-blue high-resolution digital images for assessment of winter cover crop biomass,” GISci. Remote Sens., 48 (1), 86–98, https://doi.org/10.2747/1548-1603.48.1.86 (2011).

15. K. Prabhakara, W. D. Hively and G. W. McCarty, “Evaluating the relationship between biomass, percent groundcover and remote sensing indices across six winter cover crop fields in Maryland, United States,” Int. J. Appl. Earth Obs. Geoinf., 39, 88–102, https://doi.org/10.1016/j.jag.2015.03.002 (2015).

16. M. L. Campagnolo and J. O. Cerdeira, “Contextual classification of remotely sensed images with integer linear programming,” in Computational Modelling of Objects Represented in Images, 123–128, CRC Press (2018).

17. S. M. De Jong, T. Hornstra and H. G. Maas, “An integrated spatial and spectral approach to the classification of Mediterranean land cover types: the SSC method,” Int. J. Appl. Earth Obs. Geoinf., 3 (2), 176–183, https://doi.org/10.1016/S0303-2434(01)85009-1 (2001).

18. T. Van de Voorde et al., “Extraction of land use/land cover related information from very high resolution data in urban and suburban areas,” in Remote Sens. in Trans.: Proc. 23rd Symp. of the Eur. Assoc. of Remote Sens. Lab., 237–244 (2004).

19. J. S. Kim and K. Kim, “Analysis of 2016 Minamiaso landslides using remote sensing and geographic information system,” J. Appl. Remote Sens., 12 (3), 036001, https://doi.org/10.1117/1.JRS.12.036001 (2018).

20. A. I. De Castro et al., “Mapping Cynodon dactylon infesting cover crops with an automatic decision tree-OBIA procedure and UAV imagery for precision viticulture,” Remote Sens., 12 (1), 56, https://doi.org/10.3390/rs12010056 RSEND3 (2019).

21. “Trimble eCognition | Trimble geospatial,” https://geospatial.trimble.com/products-and-solutions/trimble-ecognition (2023).

24. EU & ESA, “Copernicus Browser,” https://dataspace.copernicus.eu/browser/ (2023).

25. USGS, “EarthExplorer,” https://earthexplorer.usgs.gov/ (2024).

26. F. Warmerdam et al., “gdalbuildvrt – GDAL documentation,” https://gdal.org/programs/gdalbuildvrt.html (2022).

27. J. Michel, D. Youssefi and M. Grizonnet, “Stable mean-shift algorithm and its application to the segmentation of arbitrarily large remote sensing images,” IEEE Trans. Geosci. Remote Sens., 53 (2), 952–964, https://doi.org/10.1109/TGRS.2014.2330857 IGRSD2 0196-2892 (2014).

28. “Orfeo ToolBox – Orfeo ToolBox is not a black box,” https://www.orfeo-toolbox.org/ (2019).

29. M. Grizonnet et al., “Orfeo ToolBox: open source processing of remote sensing images,” Open Geospatial Data Software Stand., 2 (1), 1–8, https://doi.org/10.1186/s40965-017-0031-6 (2017).

30. K. Fukunaga and L. Hostetler, “The estimation of the gradient of a density function, with applications in pattern recognition,” IEEE Trans. Inf. Theory, 21 (1), 32–40, https://doi.org/10.1109/TIT.1975.1055330 IETTAW 0018-9448 (1975).

31. D. Ming et al., “Semivariogram-based spatial bandwidth selection for remote sensing image segmentation with mean-shift algorithm,” IEEE Geosci. Remote Sens. Lett., 9 (5), 813–817, https://doi.org/10.1109/LGRS.2011.2182604 (2012).

32. B. Georgescu, I. Shimshoni and P. Meer, “Mean shift based clustering in high dimensions: a texture classification example,” in IEEE Int. Conf. Comput. Vision, 456–456 (2003), https://doi.org/10.1109/ICCV.2003.1238382.

33. M. Á. Carreira-Perpiñán, “Clustering methods based on kernel density estimators: mean-shift algorithms,” in Handbook of Cluster Analysis, 383–418, CRC/Chapman and Hall, Boca Raton, Florida (2015).

34. CNES, “LSMSSegmentation – Orfeo ToolBox 8.0.1 documentation,” https://www.orfeo-toolbox.org/CookBook/Applications/app_LSMSSegmentation.html (2022).

35. USDA National Agricultural Statistics Service Cropland Data Layer, “Published crop-specific data layer [online],” https://nassgeodata.gmu.edu/CropScape (2021).

36. W. D. Hively et al., “Estimating the effect of winter cover crops on nitrogen leaching using cost-share enrollment data, satellite remote sensing, and Soil and Water Assessment Tool (SWAT) modeling,” J. Soil Water Conserv., 75 (3), 362–375, https://doi.org/10.2489/jswc.75.3.362 JSWCA3 0022-4561 (2020).

37. “Welcome to the QGIS project!,” http://www.qgis.org (2022).

38. “2021 Delaware cropland data layer | NASS/USDA,” https://www.nass.usda.gov/Research_and_Science/Cropland/metadata/metadata_de21.htm (2022).

39. S. Hao, Y. Cui and J. Wang, “Segmentation scale effect analysis in the object-oriented method of high-spatial-resolution image classification,” Sensors, 21 (23), 7935, https://doi.org/10.3390/s21237935 SNSRES 0746-9462 (2021).

40. “Normal Bayes classifier — OpenCV 2.4.13.7 documentation,” https://docs.opencv.org/2.4/modules/ml/doc/normal_bayes_classifier.html (2014).

41. Y. Qian et al., “Comparing machine learning classifiers for object-based land cover classification using very high resolution imagery,” Remote Sens., 7 (1), 153–168, https://doi.org/10.3390/rs70100153 RSEND3 (2014).

42. T. Lillesand, R. W. Kiefer and J. Chipman, Remote Sensing and Image Interpretation, John Wiley & Sons (2015).

43. F. Warmerdam et al., “gdal_pansharpen.py – GDAL documentation,” https://gdal.org/programs/gdal_pansharpen.html (2024).

44. J. Radoux et al., “Thematic accuracy assessment of geographic object-based image classification,” Int. J. Geogr. Inf. Sci., 25 (6), 895–911, https://doi.org/10.1080/13658816.2010.498378 (2011).

45. M. G. MacLean and R. G. Congalton, “Map accuracy assessment issues when using an object-oriented approach,” in Proc. Am. Soc. for Photogramm. and Remote Sens. Annu. Conf., 19–23 (2012).

46. J. R. Landis and G. G. Koch, “The measurement of observer agreement for categorical data,” Biometrics, 33, 159–174, https://doi.org/10.2307/2529310 BIOMB6 0006-341X (1977).

47. Q. Zhou et al., “Recent rapid increase of cover crop adoption across the US Midwest detected by fusing multi-source satellite data,” Geophys. Res. Lett., 49 (22), e2022GL100249, https://doi.org/10.1029/2022GL100249 GPRLAJ 0094-8276 (2022).

48. CTIC, North Central SARE, and ASTA, Annual Report 2016-17 Cover Crop Survey, joint publication of the Conservation Technology Information Center, the North Central Region Sustainable Agriculture Research and Education Program, and the American Seed Trade Association, West Lafayette, Indiana (2017).

49. CTIC, SARE, and ASTA, National Cover Crop Survey Report 2022-2023, joint publication of the Conservation Technology Information Center, the Sustainable Agriculture Research and Education Program, and the American Seed Trade Association, West Lafayette, Indiana (2023).

50. “2017 census of agriculture – state data,” https://www.nass.usda.gov/Publications/AgCensus/2017/ (2019).

51. “Object-based classification (tutorial) – AWF-Wiki,” http://wiki.awf.forst.uni-goettingen.de/wiki/index.php/Exercise_10:_Object-based_classification (2021).

Biography

Jae Sung Kim is an assistant professor in the Department of Civil, Environmental, and Geospatial Engineering at Michigan Technological University. He received his PhD in geomatics from the School of Civil Engineering at Purdue University. His research interests are remote sensing, photogrammetry, GIS, geodesy, and geospatial cyberinfrastructure.

Sara P. Syswerda is the education director at Pierce Cedar Creek Institute. She received her PhD in crops and soil science and ecology, evolutionary biology, and behavior from Michigan State University. Her research interests are in carbon and nitrogen biogeochemistry, ecosystem services, and science education.

Sigrid D. P. Smith is an associate professor/biostatistician in the Department of Agriculture and Natural Resources at Delaware State University. She received her PhD in ecology, evolution, and conservation biology from University of Illinois at Urbana-Champaign. Her research interests include species interactions, ecological community stability, and feedbacks between ecosystems and human communities.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Jae Sung Kim, Sara P. Syswerda, and Sigrid D. P. Smith "Detection of wintertime green vegetated cover using object-based classification with open-source remote sensing and geospatial technologies," Journal of Applied Remote Sensing 18(2), 024515 (17 June 2024). https://doi.org/10.1117/1.JRS.18.024515
Received: 26 May 2023; Accepted: 27 May 2024; Published: 17 June 2024
KEYWORDS: Landsat; Remote sensing; Image classification; Agriculture; Image segmentation; Accuracy assessment; Education and training
