In this study, we trained a convolutional neural network (CNN) using a mix of recent CNN architectural design strategies. Our goal is to leverage these modern techniques to improve the binary classification of kidney tumor images obtained using Multi-Photon Microscopy (MPM). We demonstrate that incorporating these newer model design elements, coupled with transfer learning, image standardization, and data augmentation, leads to significantly increased classification performance over previous results. Our best model averages over 90% sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUROC) in image-level classification across cross-validation folds, surpassing the previous best in all four metrics.
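A minimal sketch of the kind of pipeline described above, combining transfer learning, image standardization, and data augmentation for a binary classifier, assuming a PyTorch/torchvision setup. The ResNet-18 backbone, normalization statistics, directory layout, and hyperparameters are illustrative assumptions and are not taken from the study.

```python
# Hedged sketch: transfer learning with augmentation and standardization for
# binary MPM image classification. Model choice, dataset layout, and
# hyperparameters are illustrative assumptions, not the authors' exact setup.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Image standardization plus augmentation (flips/rotations are common for
# microscopy, where orientation carries no diagnostic meaning).
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(90),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Assumed layout: mpm_patches/train/{chRCC,oncocytoma}/*.png
train_ds = datasets.ImageFolder("mpm_patches/train", transform=train_tf)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Transfer learning: start from ImageNet weights and replace the classifier
# head with a single logit for the binary chRCC-vs-oncocytoma decision.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

model.train()
for images, labels in train_dl:
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
```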
Convolutional neural networks (CNNs) are a class of machine learning models that are especially well suited for image-based tasks. In this study, we design and train a CNN on tissue samples imaged using Multi-Photon Microscopy (MPM) and show that the model can distinguish between chromophobe renal cell carcinoma (chRCC) and oncocytoma. We demonstrate a method for training the model using simple max-pooling vote fusion, and we use the model to highlight the regions of the input that drive a positive classification. The model can be tuned for higher sensitivity at the cost of specificity by adjusting a constant threshold, with little impact on overall accuracy. Several numerical experiments were run to measure the model's accuracy at both the image and patient level. Our models were designed with a dropout parameter that biases the model toward higher sensitivity or specificity. Our best-performing model, as measured by area under the receiver operating characteristic curve (AUROC) on patient-level classification, achieves 94% AUROC and 88% accuracy, along with 100% sensitivity and 75% specificity.
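A possible reading of max-pooling vote fusion, sketched below under stated assumptions: each patient's score is the maximum probability over that patient's images, so a single strongly positive image flags the patient, and sweeping the decision threshold trades sensitivity against specificity. The variable names and example values are hypothetical.

```python
# Hedged sketch of max-pooling vote fusion and threshold tuning. The
# per-image probabilities, patient IDs, and labels below are placeholders.
from sklearn.metrics import roc_auc_score, roc_curve

def fuse_max(image_probs, patient_ids):
    """Max-pooling vote fusion: one score per patient = max over its image scores."""
    fused = {}
    for prob, pid in zip(image_probs, patient_ids):
        fused[pid] = max(fused.get(pid, 0.0), prob)
    return fused

# Hypothetical per-image probabilities for three patients.
image_probs = [0.91, 0.40, 0.15, 0.22, 0.55, 0.83]
patient_ids = ["A", "A", "B", "B", "C", "C"]
labels = {"A": 1, "B": 0, "C": 1}           # 1 = chRCC, 0 = oncocytoma

fused = fuse_max(image_probs, patient_ids)
y_true = [labels[pid] for pid in fused]
y_score = [fused[pid] for pid in fused]

print("patient-level AUROC:", roc_auc_score(y_true, y_score))

# Sweeping the decision threshold trades sensitivity (TPR) against
# specificity (1 - FPR): lowering it catches more chRCC cases at the cost
# of more false positives among oncocytoma.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f}  sensitivity={t:.2f}  specificity={1 - f:.2f}")
```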
A clear distinction between oncocytoma and chromophobe renal cell carcinoma (chRCC) is critically important for the clinical management of patients, but it can be difficult to distinguish the two entities based on hematoxylin and eosin (H&E) stained sections alone. In this study, second harmonic generation (SHG) signals, which are highly specific to collagen, were used to image collagen fibril structure. We conducted a pilot study to develop a new diagnostic method based on the analysis of collagen associated with kidney tumors using convolutional neural networks (CNNs). CNNs are a class of machine learning models well suited to extracting information from images. This study examines a CNN model's ability to differentiate between oncocytoma (benign) and chRCC (malignant) kidney tumor images acquired with SHG. To the best of our knowledge, this is the first study that attempts to distinguish the two entities based on their collagen structure. The model developed in this study achieved an overall classification accuracy of 68.7%, with a specificity of 66.3% and a sensitivity of 74.6%. While these results reflect an ability to classify the kidney tumors better than chance, further studies will be carried out to (a) better realize the tumor classification potential of this method with a larger sample size and (b) combine SHG with the two-photon excited intrinsic fluorescence signal to achieve better classification.
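For reference, the reported accuracy, specificity, and sensitivity follow from the binary confusion matrix in the usual way; a minimal sketch is below. The arrays are placeholder predictions, not data from the study.

```python
# Hedged sketch: sensitivity, specificity, and accuracy from a binary
# confusion matrix with scikit-learn. The labels and predictions are
# illustrative placeholders only.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 0, 1, 0, 0, 1]   # 1 = chRCC (malignant), 0 = oncocytoma (benign)
y_pred = [1, 0, 0, 1, 1, 0, 0, 1]   # model's image-level decisions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)        # fraction of chRCC images correctly flagged
specificity = tn / (tn + fp)        # fraction of oncocytoma images correctly cleared
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} accuracy={accuracy:.3f}")
```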