Convolutional neural networks (CNNs) perform excellently in many image processing and computer vision tasks. However, their complex structure and vast number of parameters demand substantial computational and storage resources, making them challenging to deploy, especially on mobile or embedded devices. Furthermore, CNNs often suffer from limited transferability and susceptibility to overfitting. Sparsity and pruning are commonly used techniques to address these issues; current methods include Spatial Dropout, block sparsity, structured pruning, dynamic pruning, and model-independent, retraining-free sparsity. Our algorithm enhances CNNs by applying a Dropout-like operation within the convolutional kernels themselves. Drawing inspiration from sparse CNNs and the ROCKET method, this approach employs randomly sparse convolutional kernels to reduce the data density processed during convolution operations. This novel method improves performance and efficiency, demonstrating its potential as a significant advancement in CNN architecture. The method is evaluated on several popular datasets by adjusting parameters within the same model and on different hardware. Compared to traditional CNNs, it demonstrates improved training speed and accuracy and reduced overfitting, as measured by FLOPs and validation-set accuracy.
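To make the idea of a Dropout-like operation applied to kernel weights concrete, the following is a minimal sketch in PyTorch. It is an illustration under stated assumptions, not the authors' exact implementation: the class name `RandomSparseConv2d`, the `sparsity` parameter, and the choice of a fixed binary mask sampled at initialization are all assumptions used only to convey the mechanism of randomly sparse convolutional kernels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomSparseConv2d(nn.Module):
    """Conv2d whose kernel weights are masked by a fixed random binary mask,
    i.e. a Dropout-like operation applied to the kernel itself rather than to
    activations. Illustrative sketch, not the paper's reference code."""

    def __init__(self, in_ch, out_ch, kernel_size, sparsity=0.5, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)
        # Fixed random mask: each kernel weight is kept with probability (1 - sparsity).
        mask = (torch.rand_like(self.conv.weight) >= sparsity).float()
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Masked weights never contribute, reducing the effective density of
        # the data processed by the convolution.
        return F.conv2d(x, self.conv.weight * self.mask, self.conv.bias,
                        stride=self.conv.stride, padding=self.conv.padding,
                        dilation=self.conv.dilation, groups=self.conv.groups)

# Usage: drop-in replacement for nn.Conv2d
layer = RandomSparseConv2d(3, 16, kernel_size=3, sparsity=0.5, padding=1)
y = layer(torch.randn(1, 3, 32, 32))
```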