Breast cancer cell analysis has traditionally focused on the morphology of cells and their intracellular organelles. Recent research has demonstrated that organelle topology-based cancer cell classification is considerably more accurate when handcrafted feature extraction and machine learning classifiers are applied to fluorescent confocal microscopy images. However, this methodology requires manual segmentation and computational organelle rendering for feature extraction and classification. Herein, we employ convolutional neural networks (CNNs) and Gradient-weighted Class Activation Mapping (GradCAM) for fast, end-to-end classification and visual interpretation of confocal fluorescent microscopy images based on spatial organelle features. First, raw 3D images are filtered and preprocessed into 2D image patches for the CNN. To replicate feature analysis of the surface-surface contact area, a marginal intermediate-fusion CNN is implemented to classify each patch. GradCAM is then applied post hoc to generate a heatmap of the regions most important for each patch's classification. These patch-level heatmaps are then reassembled according to the locations from which their patches were extracted, yielding a heatmap of the entire microscopy image. Finer-grained heatmaps are further obtained by overlapping and weighting the patches during initial preprocessing. On a dataset of six breast cancer cell lines, this methodology achieved a classification accuracy of 95.7% while also visualizing areas indicative of specific cancer cell lines. These findings demonstrate the efficacy of deep learning and GradCAM for fast and interpretable organelle-based cancer cell classification.
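The reassembly of patch-level GradCAM heatmaps into a whole-image heatmap can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the stride/overlap scheme, and uniform (coverage-count) weighting are all assumptions; the abstract's weighting scheme may differ.

```python
import numpy as np

def reconstruct_heatmap(patch_maps, coords, image_shape, patch_size):
    """Stitch per-patch GradCAM maps back into a full-image heatmap.

    patch_maps : list of (patch_size, patch_size) arrays, one per patch
    coords     : list of (row, col) top-left corners where each patch was extracted
    Overlapping regions are blended by accumulating both heatmap values and a
    per-pixel coverage weight, then normalizing (a simple averaging scheme).
    """
    heat = np.zeros(image_shape, dtype=np.float64)
    weight = np.zeros(image_shape, dtype=np.float64)
    for pm, (y, x) in zip(patch_maps, coords):
        heat[y:y + patch_size, x:x + patch_size] += pm
        weight[y:y + patch_size, x:x + patch_size] += 1.0
    # Avoid division by zero for pixels not covered by any patch.
    return heat / np.maximum(weight, 1.0)
```

With overlapping extraction (stride smaller than the patch size), each pixel is covered by several patches, so the normalized accumulation smooths patch-boundary artifacts and yields the finer-grained heatmaps described above.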