A change in tissue stiffness can indicate pathological processes and therefore supports physicians in diagnosis and treatment. Ultrasound shear wave elastography imaging (US-SWEI) can be used to quantify tissue stiffness by estimating the velocity of propagating shear waves. While a linear US probe with a lateral imaging width of approximately 40 mm is commonly used and US-SWEI has been successfully demonstrated, some clinical applications, such as laparoscopic or endoscopic interventions, require small probes. This limits the lateral imaging width to the millimeter range and substantially reduces the information available in the US images. In this work, we systematically analyze the effect of a reduced lateral imaging width on shear wave velocity estimation using the conventional time-of-flight (ToF) method and spatio-temporal convolutional neural networks (ST-CNNs). For our study, we use tissue-mimicking gelatin phantoms with varying stiffness and resulting shear wave velocities ranging from 3.63 m/s to 7.09 m/s. We find that the lateral imaging width has a substantial impact on the performance of ToF, while shear wave velocity estimation with ST-CNNs remains robust. Our results show that shear wave velocity estimation with ST-CNNs can even be performed for a lateral imaging width of 2.1 mm, resulting in a mean absolute error of 0.81 ± 0.61 m/s.
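To make the spatio-temporal estimation idea concrete, the following minimal PyTorch sketch regresses a single shear wave velocity from a stack of tracked displacement frames. The network, its layer sizes, and the assumed input shape (time × axial × lateral) are illustrative assumptions, not the ST-CNN architecture evaluated in the paper.

```python
# Minimal sketch of a spatio-temporal CNN (ST-CNN) that regresses one shear wave
# velocity from a displacement-frame stack. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class STCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # joint temporal/spatial filtering
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                      # collapse to a global descriptor
        )
        self.regressor = nn.Linear(32, 1)                 # single output: velocity in m/s

    def forward(self, x):                                 # x: (batch, 1, time, axial, lateral)
        return self.regressor(self.features(x).flatten(1))

# Even a narrow lateral field of view (here only 8 lateral samples) yields one estimate.
velocity = STCNN()(torch.randn(2, 1, 32, 64, 8))          # -> shape (2, 1)
```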
Lesion detection in brain Magnetic Resonance Images (MRIs) remains a challenging task. MRIs are typically read and interpreted by domain experts, which is a tedious and time-consuming process. Recently, unsupervised anomaly detection (UAD) in brain MRI with deep learning has shown promising results for providing a quick initial assessment. So far, these methods rely only on the visual appearance of healthy brain anatomy for anomaly detection. Another biomarker for abnormal brain development is the deviation between brain age and chronological age, which is unexplored in combination with UAD. We propose deep learning for UAD in 3D brain MRI that takes additional age information into account. We analyze the value of age information during training and as an additional anomaly score, and we systematically study several architecture concepts. Based on our analysis, we propose a novel deep learning approach for UAD with multi-task age prediction. We use clinical T1-weighted MRIs of 1735 healthy subjects and the publicly available BraTS 2019 data set for our study. Our novel approach significantly improves UAD performance, reaching an AUC of 92.60% compared to 84.37% for previous approaches without age information.
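As an illustration of how age information can be combined with unsupervised reconstruction, the following PyTorch sketch attaches an age-regression head to a tiny 3D autoencoder and mixes the brain-age deviation into the anomaly score. The architecture, the loss weighting `alpha`, and the function names are assumptions for illustration, not the implementation used in the study.

```python
# Minimal sketch of multi-task UAD: reconstruct the MRI and predict age from the same
# latent code; at test time, combine reconstruction error and brain-age deviation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AgeAwareAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1),
        )
        self.age_head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 1))

    def forward(self, x):                       # x: (batch, 1, D, H, W), even spatial sizes
        z = self.encoder(x)
        return self.decoder(z), self.age_head(z)

def training_loss(model, mri, chronological_age, alpha=0.1):
    recon, pred_age = model(mri)                # chronological_age: (batch, 1) in years
    return F.mse_loss(recon, mri) + alpha * F.l1_loss(pred_age, chronological_age)

def anomaly_score(model, mri, chronological_age):
    # Classic reconstruction error plus the deviation of predicted from chronological age.
    recon, pred_age = model(mri)
    return F.mse_loss(recon, mri) + (pred_age - chronological_age).abs().mean()
```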
Medulloblastoma (MB) is the most common malignant brain tumor in childhood. The diagnosis is generally based on the microscopic evaluation of histopathological tissue slides. However, visual-only assessment of histopathological patterns is a tedious and time-consuming task and is also affected by observer variability. Hence, automated MB tumor classification could assist pathologists by promoting consistency and robust quantification. Recently, convolutional neural networks (CNNs) have been proposed for this task, and transfer learning has shown promising results. In this work, we propose an end-to-end MB tumor classification approach and explore transfer learning with various input sizes and matching network dimensions. We focus on differentiating between the histological subtypes classic and desmoplastic/nodular. For this purpose, we systematically evaluate recently proposed EfficientNets, which uniformly scale all dimensions of a CNN. Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements compared to commonly used pre-trained CNN architectures. We also highlight the importance of transfer learning when using such large architectures. Overall, our best performing method achieves an F1-score of 80.1%.
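As a hedged illustration of this kind of transfer learning, the torchvision sketch below loads an ImageNet-pretrained EfficientNet and replaces its head for the two histological subtypes. The choice of EfficientNet-B3, the 300 × 300 patch size, and the two-class head are assumptions for illustration, not the exact configuration of the paper.

```python
# Minimal sketch: pre-trained EfficientNet with an input resolution matched to larger
# histopathology patches and a new two-class head (classic vs. desmoplastic/nodular).
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b3(weights=models.EfficientNet_B3_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)  # replace ImageNet head

patches = torch.randn(4, 3, 300, 300)   # B3 is scaled for roughly 300x300 inputs
logits = model(patches)                 # -> shape (4, 2)
```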
The initial assessment of skin lesions is typically based on dermoscopic images. As this is a difficult and time-consuming task, machine learning methods using dermoscopic images have been proposed to assist human experts. Other approaches have studied electrical impedance spectroscopy (EIS) as a basis for clinical decision support systems. Both methods represent different ways of measuring skin lesion properties, as dermoscopy relies on visible light and EIS uses electric currents. Thus, the two methods might carry complementary features for lesion classification. Therefore, we propose joint deep learning models considering both EIS and dermoscopy for melanoma detection. For this purpose, we first study machine learning methods for EIS that incorporate domain knowledge and previously used heuristics into the design process. As a result, we propose a recurrent model with state-max-pooling, which automatically learns the relevance of different EIS measurements. Second, we combine this new model with different convolutional neural networks that process dermoscopic images. We study ensembling approaches and also propose a cross-attention module guiding information exchange between the EIS and dermoscopy model. In general, combinations of EIS and dermoscopy clearly outperform models that only use either EIS or dermoscopy. At a clinically relevant sensitivity of 98%, our attention-based, combined model outperforms the single-modality models with specificities of 34.4% (CI 31.3-38.4) for dermoscopy, 34.7% (CI 31.0-38.8) for EIS, and 53.7% (CI 50.1-57.6) for the combined model.
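The following PyTorch sketch illustrates the state-max-pooling idea for EIS: a recurrent network processes the sequence of EIS measurements, and the per-step hidden states are max-pooled so that the most informative measurement dominates. The GRU, the feature and hidden sizes, and the class count are illustrative assumptions, and the cross-attention fusion with the dermoscopy CNN is not shown.

```python
# Minimal sketch of a recurrent EIS model with state-max-pooling over per-measurement
# hidden states. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class EISRecurrentNet(nn.Module):
    def __init__(self, n_features=64, hidden=128, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, n_measurements, n_features)
        states, _ = self.gru(x)                # one hidden state per EIS measurement
        pooled, _ = states.max(dim=1)          # state-max-pooling over the sequence
        return self.classifier(pooled)

logits = EISRecurrentNet()(torch.randn(8, 10, 64))   # e.g. 10 EIS measurements per lesion
```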
Early detection of head and neck tumors is crucial for patient survival. Often, diagnoses are made based on endoscopic examination of the larynx followed by biopsy and histological analysis, leading to high interobserver variability due to subjective assessment. In this regard, early non-invasive diagnostics independent of the clinician would be a valuable tool. A recent study has shown that hyperspectral imaging (HSI) can be used for non-invasive detection of head and neck tumors, as precancerous or cancerous lesions show specific spectral signatures that distinguish them from healthy tissue. However, HSI data processing is challenging due to high spectral variations, various image interferences, and the high dimensionality of the data. Therefore, the performance of automatic HSI analysis has been limited and, so far, deep learning has mostly been applied in ex-vivo studies. In this work, we analyze deep learning techniques for in-vivo hyperspectral laryngeal cancer detection. For this purpose, we design and evaluate convolutional neural networks (CNNs) with 2D spatial or 3D spatio-spectral convolutions combined with a state-of-the-art Densenet architecture. For evaluation, we use an in-vivo data set with HSI of the oral cavity or oropharynx. Overall, we present multiple deep learning techniques for in-vivo laryngeal cancer detection based on HSI and show that jointly learning from the spatial and spectral domain notably improves classification accuracy. Our 3D spatio-spectral Densenet achieves an average accuracy of 81%.
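To illustrate the spatio-spectral idea, the short PyTorch sketch below treats the spectral bands as a third convolution dimension so that each 3D filter learns joint spatial-spectral features. The two-layer stack, patch size, and band count are illustrative assumptions rather than the Densenet-based architecture used in the work.

```python
# Minimal sketch of 3D spatio-spectral convolutions on an HSI patch: the kernel spans
# spectral bands and the spatial neighborhood jointly. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

spatio_spectral = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),  # (spectral, y, x) kernel
    nn.ReLU(),
    nn.Conv3d(16, 32, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                                            # healthy vs. tumorous
)

# An HSI patch with e.g. 30 spectral bands and a 32x32 spatial window.
logits = spatio_spectral(torch.randn(4, 1, 30, 32, 32))          # -> shape (4, 2)
```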
Here we present a study in which we used in vivo hyperspectral imaging (HSI) for the detection of upper aerodigestive tract (UADT) cancer. Hyperspectral data sets were recorded in vivo in 100 patients before surgery. We established an automated data interpretation pathway that can classify the tissue as healthy or tumorous using different deep learning techniques. Our method is based on convolutional neural networks (CNNs) with 2D spatial or 3D spatio-spectral convolutions combined with a state-of-the-art Densenet architecture. Using both the spatial and spectral domain notably improves classification accuracy. Our 3D spatio-spectral Densenet classification method achieves an average accuracy of over 80%.