Histopathological analysis of thyroid nodules is the current gold standard for the differential diagnosis of thyroid tumors. Deep learning methods have been extensively used for the diagnosis of histopathology images. We investigate the feasibility of differential diagnosis of thyroid tumors by analysing histopathology images of thyroid nodule capsules using three deep learning methods: Residual Network (ResNet), Densely Connected Network (DenseNet), and Vision Transformer (ViT). Our study shows the superiority of histopathology images of thyroid nodule capsules over histopathology images of thyroid nodules for the differential diagnosis of thyroid tumors.
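As a rough illustration of the architecture comparison described above, the sketch below builds the three backbones with torchvision and replaces each classification head for a two-class task (e.g., benign vs. malignant); the two-class setup, pretrained weights, and input size are assumptions for illustration, not details from the paper.

```python
# Minimal sketch (not the authors' code): fine-tuning three backbones on
# capsule-region histopathology patches. The two-class setup is assumed.
import torch
import torch.nn as nn
from torchvision import models

def build_classifier(arch: str, num_classes: int = 2) -> nn.Module:
    if arch == "resnet":
        net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        net.fc = nn.Linear(net.fc.in_features, num_classes)
    elif arch == "densenet":
        net = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        net.classifier = nn.Linear(net.classifier.in_features, num_classes)
    elif arch == "vit":
        net = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
        net.heads.head = nn.Linear(net.heads.head.in_features, num_classes)
    else:
        raise ValueError(arch)
    return net

model = build_classifier("vit")
logits = model(torch.randn(4, 3, 224, 224))  # four 224x224 RGB patches
print(logits.shape)  # torch.Size([4, 2])
```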
KEYWORDS: Data modeling, Chest imaging, Education and training, Machine learning, Performance modeling, Deep learning, Systems modeling, Design and modelling, Data privacy, Statistical modeling, Medical imaging, Computer aided detection
Deep learning models have achieved great success in the automated analysis of chest x-rays. However, many such models lack generalizability, i.e., a model trained on one dataset often performs poorly on a different dataset. One possible reason for this performance drop is the difference in data distributions across institutions. In this context, utilizing data from multiple institutions to train a deep learning model can expose the model to a wider variety of data during training and thereby improve its generalizability. However, such an approach does not preserve data privacy. Federated learning addresses this limitation: it allows multiple institutions to jointly develop a machine learning model using data from all institutions without sharing the data, thus preserving data privacy. Although federated learning has advanced significantly, such methods remain rare in the context of chest x-ray diagnosis, and most existing models do not utilize chest x-ray datasets from multiple institutions. In this work, we design a federated learning framework for chest x-ray diagnosis using datasets from multiple institutions. Our model shows improved generalizability in chest x-ray diagnosis across several publicly available large-scale chest x-ray datasets.
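The abstract does not specify the aggregation scheme; the sketch below illustrates one common choice, FedAvg-style weight averaging, where each institution trains locally on its own chest x-rays and only model weights leave the site. All names, the multi-label loss, and the equal weighting of institutions are assumptions.

```python
# FedAvg-style sketch (an assumption; the paper's exact scheme is not
# stated in the abstract). Only weights are exchanged, never patient data.
import copy
import torch

def local_update(global_model, loader, epochs=1, lr=1e-4):
    # Each institution fine-tunes a copy of the global model locally.
    model = copy.deepcopy(global_model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()  # multi-label CXR findings
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def federated_round(global_model, institution_loaders):
    # Average the locally trained weights (equal institution sizes assumed).
    states = [local_update(global_model, dl) for dl in institution_loaders]
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```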
Universal lesion detection and tagging (ULDT) in CT studies is critical for tumor burden assessment and for tracking the progression of lesion status (growth/shrinkage) over time. However, a lack of fully annotated data hinders the development of effective ULDT approaches. Prior work used the DeepLesion dataset (4,427 patients, 10,594 studies, 32,120 CT slices, 32,735 lesions, 8 body part labels) for algorithmic development, but this dataset is not completely annotated and contains class imbalances. To address these issues, we developed a self-training pipeline for ULDT. A VFNet model was trained on a limited 11.5% subset of DeepLesion (bounding boxes + tags) to detect and classify lesions in CT studies. It then identified novel lesion candidates in a larger unseen data subset, incorporated them into its training set, and self-trained over multiple rounds. Multiple self-training experiments were conducted with different threshold policies to select higher-quality predicted lesions and to address the class imbalances. We discovered that direct self-training improved the sensitivities of over-represented lesion classes at the expense of under-represented ones. However, upsampling the lesions mined during self-training, combined with a variable threshold policy, yielded a 6.5% increase in sensitivity at 4 FP compared to self-training without class balancing (72% vs 78.5%) and an 11.7% increase compared to the same self-training policy without upsampling (66.8% vs 78.5%). Furthermore, we show that our results either improved or maintained the sensitivity at 4 FP for all 8 lesion classes.
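One way the variable threshold policy with class balancing might be realized is sketched below; the score thresholds, the rarity-scaling rule, and duplication-based upsampling are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of the mining step under a variable score-threshold policy.
# Thresholds and the upsampling rule are hypothetical illustrations.
def mine_lesions(predictions, class_counts, base_thresh=0.8, min_thresh=0.5):
    max_count = max(class_counts.values())
    mined = []
    for det in predictions:  # det: dict with 'tag', 'score', 'box'
        # Rarer classes get a proportionally relaxed score threshold.
        rarity = 1.0 - class_counts[det["tag"]] / max_count
        thresh = base_thresh - (base_thresh - min_thresh) * rarity
        if det["score"] >= thresh:
            # Upsample: duplicate mined lesions of under-represented classes.
            repeats = max(1, round(max_count / class_counts[det["tag"]]))
            mined.extend([det] * repeats)
    return mined  # merged back into the training set for the next round
```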
Radiologists routinely perform the tedious task of lesion localization, classification, and size measurement in computed tomography (CT) studies. Universal lesion detection and tagging (ULDT) can simultaneously help alleviate the cumbersome nature of lesion measurement and enable tumor burden assessment. Previous ULDT approaches utilize the publicly available DeepLesion dataset; however, it does not provide the full volumetric (3D) extent of lesions and also exhibits a severe class imbalance. In this work, we propose a self-training pipeline to detect 3D lesions and tag them according to the body part in which they occur. We used a significantly limited 30% subset of DeepLesion to train a VFNet model for 2D lesion detection and tagging. Next, the 2D lesion context was expanded into 3D, and the mined 3D lesion proposals were integrated back into the baseline training data to retrain the model over multiple rounds. Through this self-training procedure, our VFNet model learned from its own predictions, detected lesions in 3D, and tagged them. Our model achieved an average sensitivity of 46.9% at [0.125:8] false positives (FP) with the limited 30% data subset, in comparison to the 46.8% of an existing approach that used the entire DeepLesion dataset. To our knowledge, we are the first to jointly detect lesions in 3D and tag them according to body part.
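A plausible reading of the 2D-to-3D expansion step is to re-run the 2D detector on neighboring slices and link boxes that overlap the seed detection; the IoU-based linking criterion below is an assumption, since the abstract does not state how the context is expanded.

```python
# Illustrative sketch: grow a 2D seed detection along the z-axis into a
# 3D lesion extent by IoU-linking detections on adjacent slices.
def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-6)

def expand_to_3d(detect_fn, volume, seed_box, seed_z, iou_thr=0.5):
    """Walk up and down from the seed slice, linking overlapping boxes."""
    extent = {seed_z: seed_box}
    for step in (1, -1):
        z, box = seed_z + step, seed_box
        while 0 <= z < volume.shape[0]:
            match = next((d for d in detect_fn(volume[z])
                          if iou(d["box"], box) >= iou_thr), None)
            if match is None:
                break
            extent[z], box = match["box"], match["box"]
            z += step
    return extent  # slice index -> 2D box, i.e., the mined 3D proposal
```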
KEYWORDS: Image segmentation, Pancreas, Computed tomography, 3D modeling, Data modeling, Magnetic resonance imaging, Tumors, 3D image processing, Tissues
A persistent issue in deep learning (DL) is the inability of models to function in a domain in which they were not trained. For example, a model trained to segment an organ in MRI scans often fails dramatically when tested on computed tomography (CT) scans. Since manual segmentation is extremely time-consuming, it is often not feasible to acquire an annotated dataset in the target domain. Domain adaptation allows the transfer of knowledge about a labelled source domain into a target domain. In this work, we address the differences in model performance when segmenting from intravenous contrast (IVC)-enhanced versus non-contrast (NC) CT scans. Most of the publicly available, large-scale, annotated CT datasets are IVC-enhanced; however, physicians frequently use NC scans in clinical practice. This necessitates methods capable of functioning reliably across both domains. We propose a novel DL framework that segments the pancreas from non-contrast CT scans by training with the help of IVC-enhanced CT scans. Our method first utilizes a CycleGAN to create synthetic NC (s-NC) variants from IVC scans. Subsequently, we introduce a multilevel 3D UNet architecture to perform pancreas segmentation. The proposed method significantly outperforms the baseline, showing a 6.2% improvement in the Dice coefficient. To our knowledge, this method is the first of its kind for pancreas segmentation from NC CTs.
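The two-stage design could be wired up roughly as follows; `cyclegan_g_ivc2nc` and `unet3d` are placeholder models standing in for the paper's CycleGAN generator and multilevel 3D UNet, and the loss and loop structure are illustrative assumptions.

```python
# High-level sketch of the two-stage pipeline (component names are
# placeholders, not the authors' actual implementation).
import torch

def make_synthetic_nc(cyclegan_g_ivc2nc, ivc_volume):
    # Stage 1: translate an IVC-enhanced CT volume slice-by-slice into a
    # synthetic non-contrast (s-NC) volume with the trained generator.
    with torch.no_grad():
        slices = [cyclegan_g_ivc2nc(s.unsqueeze(0)) for s in ivc_volume]
    return torch.cat(slices, dim=0)

def train_segmenter(unet3d, pairs, lr=1e-4, epochs=50):
    # Stage 2: train the 3D UNet on (s-NC volume, pancreas mask) pairs so
    # it learns to segment the pancreas in the non-contrast domain.
    opt = torch.optim.Adam(unet3d.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for vol, mask in pairs:
            opt.zero_grad()
            loss_fn(unet3d(vol), mask).backward()
            opt.step()
    return unet3d
```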
We propose a fast few-shot learning framework that uses transfer learning to identify different lung and chest diseases and conditions from chest x-rays. Our model can be trained with as few as five training examples, making it potentially applicable to the diagnosis of rare diseases. In this work, we divide chest diseases into two disjoint categories: (i) base classes (with a large training set) and (ii) novel classes (with only a few training examples per class). Our method consists of two steps: feature extraction and classification. For feature extraction, we employ a deep convolutional neural network customized for chest x-rays, trained only on data from the base classes. The novel classes are therefore unseen by the feature extractor during training; using it to extract features from novel-class data constitutes transfer learning. Our classifier, on the other hand, is trained only on data from the novel classes. We introduce the idea of an autoencoder ensemble to design the classifier. Only a few feature vectors from each novel class are used to train the classifier, making it a few-shot learner. Incorporating new novel classes requires training only the classifier, which makes the entire process extremely fast. The performance of the classifier is evaluated on test data from the novel classes. Experiments show an approximately 18% improvement in the F1 score compared to the baseline in identifying novel diseases from a publicly available chest x-ray dataset.
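One plausible reading of the autoencoder ensemble is a per-class autoencoder trained on the few available feature vectors, with classification by lowest reconstruction error; the sketch below follows that reading, and all layer sizes and hyperparameters are assumptions.

```python
# Sketch of an autoencoder-ensemble few-shot classifier (one small
# autoencoder per novel class; this reading and all sizes are assumptions).
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, dim=1024, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)
    def forward(self, x):
        return self.dec(self.enc(x))

def fit_class_ae(features, dim, steps=200, lr=1e-3):
    # Train one autoencoder on the few feature vectors of a single class.
    ae = AE(dim)
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(ae(features), features).backward()
        opt.step()
    return ae

def classify(feature, class_aes):
    # Assign the class whose autoencoder reconstructs the feature best.
    errors = {c: nn.functional.mse_loss(ae(feature), feature).item()
              for c, ae in class_aes.items()}
    return min(errors, key=errors.get)
```

Adding a new novel class only requires fitting one more small autoencoder on its few feature vectors, which is consistent with the speed claim in the abstract.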