Lung cancer has high incidence and mortality rates compared to other cancer types. One important factor for improved patient survival is early detection. Deep learning for lung nodule detection has been extensively studied as a tool to support clinicians in early nodule detection and classification. Many publications report high detection accuracy, and several models have been introduced into clinical practice. However, such models may show reduced performance in real-world clinical settings. In this study, we introduce a method to assess the robustness of lung nodule detection models using medically relevant image perturbations. The perturbations comprise noise and motion artifacts, created in consultation with an expert radiologist to ensure their clinical relevance for thoracic computed tomography (CT) scans. The evaluated models demonstrate robustness to clinically relevant noise simulations but show less resilience to motion artifacts in perturbed CT scans. This robustness evaluation method, incorporating simulated clinically relevant artifacts, can be extended to other applications involving the analysis of CT scans.
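The evaluation described above can be sketched in a few lines: perturb a CT volume with a simulated artifact, rerun detection, and score how many clean-scan detections are reproduced. This is a minimal illustrative sketch, not the paper's method; the perturbation functions, the noise level `sigma`, and the agreement metric are all assumptions (real studies use CT-physics-based simulations and FROC/CPM metrics).

```python
import numpy as np

def add_gaussian_noise(volume, sigma=20.0, rng=None):
    # Additive Gaussian noise; sigma in Hounsfield units is an assumed level,
    # a crude stand-in for the paper's radiologist-validated noise simulation.
    rng = np.random.default_rng(0) if rng is None else rng
    return volume + rng.normal(0.0, sigma, volume.shape)

def simulate_motion(volume, shift=2):
    # Crude motion surrogate: blend the volume with a copy shifted along
    # the cranio-caudal axis (axis 0), mimicking respiratory ghosting.
    shifted = np.roll(volume, shift, axis=0)
    return 0.5 * (volume + shifted)

def detection_agreement(det_clean, det_perturbed):
    # Fraction of clean-scan detections (e.g. nodule centroids) that are
    # reproduced on the perturbed scan: a simple robustness score.
    if not det_clean:
        return 1.0
    reproduced = sum(1 for d in det_clean if d in det_perturbed)
    return reproduced / len(det_clean)
```

A robustness study would sweep the perturbation strength (`sigma`, `shift`) and report how `detection_agreement` degrades per artifact type.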
The majority of encouraging experimental results published on AI-based endoscopic Computer-Aided Detection (CAD) systems have not yet been reproduced in clinical settings, mainly due to the highly curated datasets used throughout the experimental phase of the research. In a realistic clinical environment, these high image-quality standards cannot be guaranteed, and CAD system performance may degrade. While several studies have presented impressive outcomes with Frame Informativeness Assessment (FIA) algorithms, the current state of the art implies sequential use of FIA and CAD systems, affecting the time performance of both algorithms. Since these algorithms are often trained on similar datasets, we hypothesise that part of the learned feature representations can be leveraged by both systems, enabling a more efficient implementation. This paper explores this approach for early Barrett cancer detection by integrating the FIA algorithm within the CAD system. Sharing weights between the two tasks reduces the number of parameters from 16 to 11 million and the number of floating-point operations from 502 to 452 million. Owing to the lower architectural complexity, the proposed model achieves inference times up to 2 times faster than the state-of-the-art sequential implementation while retaining classification performance.
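The weight-sharing idea can be illustrated with a toy multi-task model: one shared feature trunk is computed once per frame and feeds both an FIA head and a CAD head, instead of running two full networks back to back. This is a minimal NumPy sketch under assumed toy dimensions; the function names, layer sizes, and gating threshold are all hypothetical and much simpler than the paper's actual architecture.

```python
import numpy as np

def shared_backbone(x, w_shared):
    # Shared feature extractor: computed once, reused by both heads.
    return np.maximum(0.0, x @ w_shared)          # ReLU features

def fia_head(features, w_fia):
    # Frame Informativeness Assessment head (informative vs. not).
    return 1.0 / (1.0 + np.exp(-(features @ w_fia)))

def cad_head(features, w_cad):
    # CAD head (suspected early Barrett neoplasia vs. benign).
    return 1.0 / (1.0 + np.exp(-(features @ w_cad)))

def process_frame(x, w_shared, w_fia, w_cad, threshold=0.5):
    # One forward pass through the shared trunk serves both tasks,
    # rather than two sequential single-task networks.
    features = shared_backbone(x, w_shared)
    informative = fia_head(features, w_fia).item()
    if informative < threshold:                   # skip CAD on poor frames
        return informative, None
    return informative, cad_head(features, w_cad).item()

# Hypothetical toy sizes; the real network has millions of parameters.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 64))                      # one frame's embedding
w_shared = rng.normal(size=(64, 32))
w_fia = rng.normal(size=(32, 1))
w_cad = rng.normal(size=(32, 1))
informativeness, lesion_prob = process_frame(x, w_shared, w_fia, w_cad)
```

Because the trunk dominates the compute, sharing it is what yields the parameter and FLOP reductions reported above: only the small task-specific heads are duplicated.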