Deep learning models are widely used for classification problems in spectroscopy because of their high accuracy, but they lack interpretability, and the central challenge lies in balancing interpretability against accuracy. Current interpretation methods, such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), perturb individual feature values and can therefore produce explanations that are mathematically meaningful yet physically implausible. To address this gap, our research proposes a group-focused methodology that targets 'spectral zones' to directly estimate the collective impact of spectral features. This approach enhances the interpretability of deep learning models, reduces the influence of noisy data, and provides a more comprehensive understanding of model behavior. By applying group perturbations, the resulting interpretations are not only more intuitive but also easier to compare with domain expertise, leading to a richer analysis of the model's decision-making process.
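As a concrete illustration of group perturbation over spectral zones, the Python sketch below perturbs an entire contiguous wavelength band at once and scores the band by the resulting drop in the predicted class probability. The function name, the zone boundaries, the mean-replacement perturbation, and the scikit-learn-style `predict_proba` interface are assumptions for illustration only, not the paper's exact implementation.

```python
# Minimal sketch of zone-wise (group) perturbation importance, assuming a
# fitted classifier with a predict_proba method and spectra stored as a
# 2-D NumPy array (samples x wavelengths).
import numpy as np

def zone_importance(model, X, zones, target_class):
    """Score each spectral zone by the average drop in predicted probability
    when all wavelengths in that zone are perturbed together.

    model        -- object exposing predict_proba(X) -> (n_samples, n_classes)
    X            -- (n_samples, n_wavelengths) spectra
    zones        -- list of (start, stop) index pairs defining contiguous bands
    target_class -- column of predict_proba to explain
    """
    baseline = model.predict_proba(X)[:, target_class]
    scores = []
    for start, stop in zones:
        X_pert = X.copy()
        # Perturb the whole zone at once: replace the band with its
        # per-sample mean, flattening any peaks inside the zone.
        X_pert[:, start:stop] = X_pert[:, start:stop].mean(axis=1, keepdims=True)
        perturbed = model.predict_proba(X_pert)[:, target_class]
        scores.append(float(np.mean(baseline - perturbed)))
    return scores

# Hypothetical usage with zones chosen from known absorption bands:
# zones = [(0, 120), (120, 300), (300, 480)]
# importances = zone_importance(clf, X_test, zones, target_class=1)
```

Because each zone is perturbed as a unit, the resulting scores can be read against known absorption or emission bands, which is what makes the explanations easier to check against domain expertise than per-wavelength attributions.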