An improved pyramid deconvolutional neural network is proposed for fine-grained segmentation of pulmonary nodules in CT scan images. The fully convolutional neural network (FCN) can be trained end-to-end, pixel-to-pixel, realizing object detection, segmentation, and classification within a single CNN structure. However, the original FCN was designed for natural-object tasks and can hardly maintain the precision required for medical images. To further improve detection precision and segmentation accuracy, we improve the FCN by fusing additional pooling layers, because deconvolution of the higher convolutional layers yields coarser segmentations while the lower convolutional layers capture finer contours. The experiment is based on the LIDC-IDRI dataset. Tenfold cross-validation is used to train and evaluate performance. The experiments show that detection precision and the fineness of segmentation increase with the number of fused pooling layers. The detection rate reaches as high as 0.931 ± 0.042. For segmentation evaluation, the intersection over union (IoU) score is applied, reaching 0.628 ± 0.065. The overlap rate (i.e., the overlap percentage of the segmentation result with the original label) is also calculated. As with detection accuracy, the improved architecture that fuses more pooling layers achieves the highest overlap rate, 0.739 ± 0.076.
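As a hedged illustration (not the authors' code), a minimal sketch of the two segmentation metrics reported above, assuming binary NumPy masks for the network output and the radiologist label; the overlap rate is interpreted here as intersection area over label area:

```python
import numpy as np

def iou(pred_mask: np.ndarray, label_mask: np.ndarray) -> float:
    """Intersection over union between a predicted mask and the label mask."""
    pred, label = pred_mask.astype(bool), label_mask.astype(bool)
    union = np.logical_or(pred, label).sum()
    return np.logical_and(pred, label).sum() / union if union else 1.0

def overlap_rate(pred_mask: np.ndarray, label_mask: np.ndarray) -> float:
    """Fraction of labeled nodule pixels covered by the predicted segmentation."""
    pred, label = pred_mask.astype(bool), label_mask.astype(bool)
    return np.logical_and(pred, label).sum() / label.sum() if label.sum() else 1.0
```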
The temporal focusing two-photon microscope (TFM) is developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and diffraction rings surrounding particles, further analysis is extremely difficult without a precise particle localization technique. In this paper, we develop a fully automated scheme to locate particle positions with high noise tolerance. Our scheme comprises the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph cuts global and local segmentation algorithm, and a kinematic-estimation-based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with unrelated pixels removed. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.
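To make the noise-reduction step concrete, the following is a simplified sketch of a per-pixel temporal Kalman filter applied to a video stack; it is a plain scalar Kalman filter with assumed noise variances, not the hybrid variant developed in the paper:

```python
import numpy as np

def kalman_denoise(frames: np.ndarray, q: float = 1e-3, r: float = 1e-1) -> np.ndarray:
    """Per-pixel temporal Kalman filter over a (T, H, W) video stack.

    q: assumed process-noise variance, r: assumed measurement-noise variance.
    """
    x = frames[0].astype(np.float64)      # state estimate per pixel
    p = np.ones_like(x)                   # estimate variance per pixel
    out = np.empty(frames.shape, dtype=np.float64)
    out[0] = x
    for t in range(1, frames.shape[0]):
        p_pred = p + q                    # predict (static-scene model)
        k = p_pred / (p_pred + r)         # Kalman gain
        x = x + k * (frames[t] - x)       # update with the new frame
        p = (1.0 - k) * p_pred
        out[t] = x
    return out
```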
To automatically detect lung nodules in CT images, we designed a two-stage computer-aided detection (CAD) system. The first stage uses graph cuts segmentation to identify and segment nodule candidates, and the second stage uses a convolutional neural network for false positive reduction. The dataset contains 595 CT cases randomly selected from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI), and the 305 pulmonary nodules that achieved diagnosis consensus from all four experienced radiologists were our detection targets. Considering each slice as an individual sample, 2844 nodules were included in our database. The graph cuts segmentation was conducted in a two-dimensional manner, and 2733 lung nodule ROIs were successfully identified and segmented. With false positive reduction by a seven-layer convolutional neural network, 2535 nodules remained detected while the false positive rate dropped to 31.6%. The average F-measure of the segmented lung nodule tissue is 0.8501.
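For reference, a minimal sketch (not the authors' code) of how a mask-level F-measure like the one reported could be computed from a segmented nodule mask and its reference mask:

```python
import numpy as np

def f_measure(pred_mask: np.ndarray, truth_mask: np.ndarray) -> float:
    """F1 score between a segmented nodule mask and its reference mask."""
    pred, truth = pred_mask.astype(bool), truth_mask.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / truth.sum() if truth.sum() else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```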
Deep learning is a trending and promising method in medical image analysis, but how to efficiently prepare the input images for deep learning algorithms remains a challenge. In this paper, we introduce a novel artificial multichannel region of interest (ROI) generation procedure for convolutional neural networks (CNN). From the LIDC database, we collected 54880 benign nodule samples and 59848 malignant nodule samples based on the radiologists' annotations. The proposed CNN consists of three pairs of convolutional layers and two fully connected layers. For each original ROI, two new ROIs were generated: one contains the segmented nodule, which highlights the nodule shape, and the other contains the gradient of the original ROI, which highlights the textures. By combining the three channel images into a pseudo-color ROI, the CNN was trained and tested on the new multichannel ROIs (multichannel ROI II). For comparison, we generated another type of multichannel image by replacing the gradient image channel with an ROI containing a whitened background region (multichannel ROI I). With 5-fold cross-validation, the CNN using multichannel ROI II achieved an ROI-based area under the curve (AUC) of 0.8823 ± 0.0177, compared to an AUC of 0.8484 ± 0.0204 obtained with the original ROI. By averaging the ROI scores of each nodule, the lesion-based AUC using the multichannel ROI was 0.8793 ± 0.0210. Comparing the convolved feature maps from CNNs trained on the different ROI types shows that multichannel ROI II preserves more accurate nodule shapes and surrounding textures.
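A hedged sketch of the pseudo-color ROI construction described above, assuming the original grayscale ROI and a binary nodule mask are already available as NumPy arrays (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def build_multichannel_roi(roi: np.ndarray, nodule_mask: np.ndarray) -> np.ndarray:
    """Stack three channels: original ROI, segmented nodule, gradient magnitude."""
    roi = roi.astype(np.float32)
    segmented = roi * nodule_mask.astype(np.float32)   # channel 2: nodule shape
    gy, gx = np.gradient(roi)
    gradient = np.hypot(gx, gy)                        # channel 3: textures/edges
    channels = [roi, segmented, gradient]
    # normalize each channel to [0, 1] before stacking into an (H, W, 3) pseudo-color ROI
    norm = [(c - c.min()) / (np.ptp(c) + 1e-8) for c in channels]
    return np.stack(norm, axis=-1)
```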
Two-photon fluorescence microscopy (TPFM) is a well-suited optical imaging technique for monitoring the interaction between fast-moving viruses and their hosts. However, due to strong, unavoidable background noise from the culture, videos obtained with this technique are too noisy to elaborate this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images, and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round identification method, tree-structured nonlinear filters, Kalman filters, and a cell tracking method. After these procedures, most of the noise was eliminated, and host images were recovered with their moving directions and speeds highlighted in the videos. From analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
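As an illustration of the cell tracking step, the following is a simplified greedy nearest-neighbor linker (not the tracking method implemented in the study), assuming per-frame centroid detections are already available:

```python
import numpy as np

def link_tracks(detections, max_jump=15.0):
    """Greedy nearest-neighbor linking of cell centroids across frames.

    detections: list of (N_t, 2) arrays of (row, col) centroids, one per frame.
    Returns a list of tracks, each a list of (frame_index, centroid) pairs.
    """
    tracks = [[(0, tuple(c))] for c in detections[0]]
    for t in range(1, len(detections)):
        remaining = [tuple(c) for c in detections[t]]
        for track in tracks:
            last_frame, last_pos = track[-1]
            if last_frame != t - 1 or not remaining:
                continue  # track already ended, or no detections left this frame
            dists = [np.hypot(p[0] - last_pos[0], p[1] - last_pos[1]) for p in remaining]
            j = int(np.argmin(dists))
            if dists[j] <= max_jump:
                track.append((t, remaining.pop(j)))
        # unmatched detections start new tracks
        tracks.extend([[(t, p)] for p in remaining])
    return tracks
```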
Based on the likelihood of malignancy, the nodules in the Lung Image Database Consortium (LIDC) database are classified into five levels. In this study, we tested the possibility of using three-dimensional (3D) texture features to identify the malignancy level of each nodule. Five groups of features were implemented and tested on 172 nodules with confident malignancy levels from four radiologists: grey level co-occurrence matrix (GLCM) features, local binary pattern (LBP) features, scale-invariant feature transform (SIFT) features, steerable features, and wavelet features. Because of the high dimensionality of the proposed features, multidimensional scaling (MDS) was used for dimension reduction. RUSBoost was applied to the extracted features for classification, owing to its advantages in handling imbalanced datasets. Each feature group and the final combined features were used to classify nodules that are highly suspicious for cancer (level 5) versus moderately suspicious (level 4). The results show that the area under the curve (AUC) and accuracy are 0.7659 and 0.8365, respectively, when using the finalized features. These features were also tested on differentiating benign and malignant cases, with reported AUC and accuracy of 0.8901 and 0.9353, respectively.
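A minimal sketch of the dimension-reduction and classification stage, assuming a feature matrix X and binary malignancy labels y have already been extracted; it uses scikit-learn's MDS and imbalanced-learn's RUSBoostClassifier as stand-ins, and the evaluation protocol shown is an assumption, not the authors' implementation:

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.model_selection import cross_val_score
from imblearn.ensemble import RUSBoostClassifier

def classify_nodules(X: np.ndarray, y: np.ndarray, n_components: int = 20):
    """Reduce high-dimensional texture features with MDS, then classify with RUSBoost."""
    X_reduced = MDS(n_components=n_components, random_state=0).fit_transform(X)
    clf = RUSBoostClassifier(n_estimators=100, random_state=0)
    # 5-fold cross-validated AUC on the reduced features (illustrative protocol)
    scores = cross_val_score(clf, X_reduced, y, cv=5, scoring="roc_auc")
    return scores.mean(), scores.std()
```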
Deep learning is considered a popular and powerful method for pattern recognition and classification. However, there are not many deep structured applications in the medical imaging diagnosis area, because large datasets are not always available for medical images. In this study, we tested the feasibility of using deep learning algorithms for lung cancer diagnosis with cases from the Lung Image Database Consortium (LIDC) database. The nodules on each computed tomography (CT) slice were segmented according to marks provided by the radiologists. After down-sampling and rotating, we acquired 174412 samples of 52 by 52 pixels each, with the corresponding truth files. Three deep learning algorithms were designed and implemented: the Convolutional Neural Network (CNN), Deep Belief Networks (DBNs), and the Stacked Denoising Autoencoder (SDAE). To compare the performance of the deep learning algorithms with a traditional computer-aided diagnosis (CADx) system, we designed a scheme with 28 image features and a support vector machine. The accuracies of the CNN, DBNs, and SDAE are 0.7976, 0.8119, and 0.7929, respectively; the accuracy of our traditional CADx is 0.7940, slightly lower than that of the CNN and DBNs. We also noticed that the nodules mislabeled by the DBNs are 4% larger than those mislabeled by the traditional CADx; this might result from the down-sampling process losing some size information about the nodules.
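For illustration only, a compact PyTorch CNN for 52 × 52 single-channel nodule patches with two output classes; the layer sizes here are assumptions for this sketch, not the architecture reported in the study:

```python
import torch
import torch.nn as nn

class NodulePatchCNN(nn.Module):
    """Small CNN for 52x52 grayscale patches, benign vs. malignant."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 52 -> 48 -> 24
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 20 -> 10
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 10 * 10, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# example: logits for a batch of 8 patches
logits = NodulePatchCNN()(torch.randn(8, 1, 52, 52))
```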
A novel breast cancer risk analysis approach is proposed to enhance the performance of computerized breast cancer risk analysis using bilateral mammograms. Based on the intensity of the breast area, five sub-regions were acquired from each mammogram, and bilateral features were extracted from every sub-region. Our dataset includes 180 bilateral mammograms from 180 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme with four image processing modules, namely sub-region segmentation, bilateral feature extraction, feature selection, and classification, was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.763 ± 0.021 when applying the multiple sub-region features to our testing dataset. The positive predictive value and the negative predictive value were 0.60 and 0.73, respectively. The study demonstrates that (1) features extracted from multiple sub-regions can improve the performance of our scheme compared with using features from the whole breast area only; (2) a classifier using bilateral asymmetry features can effectively predict breast cancer risk; and (3) incorporating texture and morphological features with density features can boost classification accuracy.
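As a hedged sketch of one way bilateral asymmetry features could be formed from matched per-sub-region feature vectors of the left and right breasts (the specific difference measures below are assumptions, not the features defined in the paper):

```python
import numpy as np

def asymmetry_features(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine per-sub-region feature vectors of the left and right breasts.

    left, right: (n_subregions, n_features) arrays of matched sub-region features.
    Returns one asymmetry feature vector per mammogram pair.
    """
    diff = np.abs(left - right)                            # absolute bilateral difference
    ratio = diff / (np.abs(left) + np.abs(right) + 1e-8)   # normalized asymmetry
    return np.concatenate([diff.ravel(), ratio.ravel()])
```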
A novel three-stage Semi-Supervised Learning (SSL) approach is proposed for improving the performance of computerized breast cancer analysis with undiagnosed data. The three stages are: (1) instance selection, which is rarely used in SSL or computerized cancer analysis systems, (2) feature selection, and (3) a newly designed 'Divide Co-training' data labeling method. 379 suspicious early breast cancer area samples from 121 mammograms were used in our research. Our proposed 'Divide Co-training' method generates two classifiers by splitting the original diagnosed dataset (labeled data) and labels the undiagnosed data (unlabeled data) when the two classifiers reach agreement. The highest AUC (area under the curve, also called the Az value) using labeled data only was 0.832, and it increased to 0.889 when undiagnosed data were included. The results indicate that the instance selection module can eliminate atypical or noisy data and enhance the subsequent semi-supervised data labeling performance. Analyzing different data sizes shows that the AUC and accuracy increase with either more diagnosed data or more undiagnosed data, reaching the best improvement (ΔAUC = 0.078, ΔAccuracy = 7.6%) with 40 labeled samples and 300 unlabeled samples.
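A minimal sketch of the 'Divide Co-training' labeling idea as described above; the classifier choice, split strategy, and agreement handling here are assumptions for illustration, not the paper's implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def divide_co_train(X_labeled, y_labeled, X_unlabeled, random_state=0):
    """Split the labeled set into two halves, train two classifiers,
    and accept pseudo-labels for unlabeled samples only when both agree."""
    rng = np.random.default_rng(random_state)
    idx = rng.permutation(len(X_labeled))
    half = len(idx) // 2
    clf_a = RandomForestClassifier(random_state=random_state).fit(
        X_labeled[idx[:half]], y_labeled[idx[:half]])
    clf_b = RandomForestClassifier(random_state=random_state).fit(
        X_labeled[idx[half:]], y_labeled[idx[half:]])
    pred_a = clf_a.predict(X_unlabeled)
    pred_b = clf_b.predict(X_unlabeled)
    agree = pred_a == pred_b
    # return the agreed pseudo-labels and a mask of which unlabeled samples they cover
    return pred_a[agree], agree
```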
We use two-photon microscopy to monitor the infection process of the marine zooplankton Cafeteria roenbergensis (C. roenbergensis) by Cafeteria roenbergensis virus (CroV), a giant DNA virus named after its host. Here, we image C. roenbergensis in culture by two-photon-excited NADH autofluorescence at video rate (30 frames/s), and the movement of C. roenbergensis is recorded in live videos. Moreover, CroV is stained with the DNA dye SYBR Gold and recorded simultaneously with this two-photon microscope. We observed the initial infection moment with this method. The result demonstrates the potential of two-photon microscopy to investigate the fast dynamic interaction between C. roenbergensis and the virus CroV. After catching this initial moment, we will freeze the sample in liquid nitrogen for cryo-electron microscopy (cryo-EM) studies to resolve the virus-host interaction at the molecular level. The long-term goal is to study similarly fast-moving pathogen-host interaction processes, which could lead to important medical applications.