Edge computing in remote sensing often necessitates on-device learning due to bandwidth and latency constraints. However, the limited memory and computational power of edge devices pose challenges for traditional machine learning approaches that rely on large datasets and complex models. Continual learning offers a potential solution in these scenarios by enabling models to adapt to evolving data streams. This paper explores leveraging a strategically selected subset of archival training data to improve performance in continual learning. We introduce a feedback-based intelligent data sampling method that uses a log-normal distribution to prioritize informative data points from the original training set, focusing on samples with which the model struggled during initial training. This simulation-based exploration investigates the trade-off between accuracy gains and resource utilization at different data inclusion rates, paving the way for deployment on real-world edge devices. The approach can improve decision making in the field, increase operational efficiency by reducing reliance on cloud resources, and grant greater autonomy to remote sensing tasks, supporting the development of robust and efficient edge-based learning systems that enable real-time, autonomous, data-driven decisions for critical tasks in remote locations.
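The sampling idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact method: it assumes per-sample losses recorded during initial training are available as the feedback signal, and that the log-normal distribution is applied over loss-sorted ranks so that high-loss ("struggled") samples are favored while easier samples retain some probability of inclusion. The function name, the rank-based weighting, and the `sigma` parameter are illustrative assumptions.

```python
import numpy as np

def lognormal_replay_sample(losses, inclusion_rate, sigma=1.0, seed=0):
    """Select a subset of archival training indices, biased toward
    high-loss samples via a log-normal prior over loss ranks.

    losses: per-sample losses from initial training (feedback signal).
    inclusion_rate: fraction of the archive to replay (resource budget).
    """
    rng = np.random.default_rng(seed)
    n = len(losses)
    k = max(1, int(inclusion_rate * n))
    # Sort so that rank 0 corresponds to the highest-loss (hardest) sample.
    order = np.argsort(losses)[::-1]
    # Log-normal density over ranks 1..n: mass concentrates on small ranks
    # (hard samples) but keeps a long tail over easier ones.
    ranks = np.arange(1, n + 1)
    weights = np.exp(-(np.log(ranks)) ** 2 / (2 * sigma ** 2)) / ranks
    weights /= weights.sum()
    # Draw k distinct rank positions, then map back to dataset indices.
    chosen_ranks = rng.choice(n, size=k, replace=False, p=weights)
    return order[chosen_ranks]
```

In a continual-learning loop, the returned indices would select the archival samples to mix into each new training round; lowering `inclusion_rate` trades accuracy for reduced memory and compute on the device.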