Quantitative image features computed from medical images are proving to be valuable biomarkers of underlying cancer biology that can be used for assessing treatment response and predicting clinical outcomes. However, validation and eventual clinical implementation of these tools is challenging due to the absence of shared software algorithms, architectures, and tools for computing, comparing, evaluating, and disseminating predictive models; in addition, researchers need programming expertise to complete these tasks. The quantitative image feature pipeline (QIFP) is an open-source, web-based, graphical user interface (GUI) for configurable quantitative image-processing pipelines for both planar (two-dimensional) and volumetric (three-dimensional) medical images. It gives researchers and clinicians a GUI-driven approach to processing and analyzing images, without having to write any software code. The QIFP allows users to upload a repository of linked imaging, segmentation, and clinical data or to access publicly available datasets (e.g., The Cancer Imaging Archive) through direct links. Researchers have access to a library of file conversion, segmentation, quantitative image feature extraction, and machine learning algorithms. An interface is also provided for users to upload their own algorithms packaged in Docker containers. The QIFP gives researchers the tools and infrastructure for the development and assessment of new imaging biomarkers and the ability to apply them in single-center and multicenter clinical and virtual clinical trials.
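The QIFP's actual container interface is not specified here; purely as an illustration of the kind of algorithm a user might package in a Docker container, the sketch below reads an image and its segmentation, computes a few simple intensity features, and writes them to a JSON file. The command-line contract, file names, and feature names are hypothetical.

```python
# Hypothetical sketch of an algorithm that could be packaged in a Docker
# container for a pipeline such as the QIFP. The input/output contract
# (paths, formats, feature names) is illustrative, not QIFP's actual API.
import json
import sys

import numpy as np
import SimpleITK as sitk  # assumed to be installed in the container image


def extract_features(image_path: str, mask_path: str) -> dict:
    """Compute a few simple intensity features inside a segmentation mask."""
    image = sitk.GetArrayFromImage(sitk.ReadImage(image_path))
    mask = sitk.GetArrayFromImage(sitk.ReadImage(mask_path)) > 0
    voxels = image[mask].astype(float)
    return {
        "mean_intensity": float(np.mean(voxels)),
        "std_intensity": float(np.std(voxels)),
        "volume_voxels": int(mask.sum()),
    }


if __name__ == "__main__":
    # Hypothetical usage: python run.py image.nii.gz mask.nii.gz features.json
    image_path, mask_path, output_path = sys.argv[1:4]
    with open(output_path, "w") as f:
        json.dump(extract_features(image_path, mask_path), f, indent=2)
```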
We explore noninvasive biomarkers of microvascular invasion (mVI) in patients with hepatocellular carcinoma (HCC) using quantitative and semantic image features extracted from contrast-enhanced, triphasic computed tomography (CT). Under institutional review board approval, we selected 28 treatment-naive HCC patients who underwent surgical resection. Four radiologists independently selected and delineated tumor margins on three axial CT images, from which we extracted computational features capturing tumor shape, image intensity, and texture. We also computed two types of “delta features,” defined as the absolute difference and the ratio of each feature’s values between all pairs of imaging phases. A total of 717 arterial, portal-venous, delayed single-phase, and delta-phase features were robust against interreader variability (concordance correlation ≥ 0.8). An enhanced cross-validation analysis showed that combining robust single-phase and delta features from the arterial and venous phases identified mVI (AUC 0.76 ± 0.18). By comparison, a previously reported semantic feature signature achieved an AUC of 0.47 to 0.58 in our cohort, and the semantic features showed only slight to moderate interreader agreement (Cohen’s kappa: 0.03 to 0.59). Though preliminary, these results suggest that quantitative image features from the arterial and venous phases may serve as surrogate biomarkers for mVI in HCC. Further study in a larger cohort is warranted.
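As a rough sketch of the two delta-feature definitions (absolute difference and ratio between phase pairs) and of screening features for interreader robustness with Lin's concordance correlation coefficient, the code below assumes per-phase feature values are held in simple dictionaries and arrays; variable names are illustrative, and only the 0.8 threshold and the definitions come from the text.

```python
# Illustrative sketch (not the study's code): delta features between imaging
# phases and Lin's concordance correlation coefficient (CCC) for assessing
# interreader robustness of a feature.
from itertools import combinations

import numpy as np


def delta_features(phase_values: dict) -> dict:
    """Absolute-difference and ratio delta features for all phase pairs.

    phase_values maps a phase name (e.g., 'arterial') to one feature's value.
    """
    deltas = {}
    eps = 1e-12  # guard against division by zero in the ratio features
    for a, b in combinations(sorted(phase_values), 2):
        deltas[f"absdiff_{a}_{b}"] = abs(phase_values[a] - phase_values[b])
        deltas[f"ratio_{a}_{b}"] = phase_values[a] / (phase_values[b] + eps)
    return deltas


def concordance_correlation(x, y) -> float:
    """Lin's CCC between two readers' values of the same feature."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)


# A feature would be retained as "robust" if CCC >= 0.8 across reader pairs.
```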
The purpose of this study is to investigate the utility of obtaining “core samples” of regions in CT volume scans for extraction of radiomic features. We asked four readers to outline tumors on three representative slices from each phase of multiphasic liver CT images taken from 29 patients with hepatocellular carcinoma (1128 segmentations in total). Core samples were obtained by automatically tracing the maximal circle inscribed in each outline. Image features capturing the intensity, texture, shape, and margin of the segmented lesion were then extracted. We calculated the intraclass correlation of features between the readers’ segmentations, and between those segmentations and their core samples, to characterize robustness to interreader variability and to the substitution of core sampling for human segmentation. We conclude that despite the high interreader variability in manually delineating the tumor (average overlap of 43% across all readers), certain features, such as intensity and texture features, are robust to segmentation. More importantly, this same subset of features can be obtained from the core samples, which provide as much information as detailed segmentation while being simpler and faster to obtain.
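The core sample is described as the maximal circle inscribed in each reader outline. One common way to obtain such a circle from a binary mask is via the Euclidean distance transform; the sketch below is an assumed implementation of that idea, not the authors' code.

```python
# Sketch of one way to derive a "core sample" as the maximal inscribed circle
# of a binary tumor outline, using the Euclidean distance transform. This is
# an illustrative implementation, not necessarily the one used in the study.
import numpy as np
from scipy.ndimage import distance_transform_edt


def maximal_inscribed_circle(mask: np.ndarray):
    """Return (center_row, center_col, radius) of the largest circle that
    fits inside the foreground of a 2-D binary mask."""
    dist = distance_transform_edt(mask)  # distance to nearest background pixel
    center = np.unravel_index(np.argmax(dist), dist.shape)
    return center[0], center[1], float(dist[center])


def core_sample_mask(mask: np.ndarray) -> np.ndarray:
    """Binary mask of the core sample (the maximal inscribed circle)."""
    r0, c0, radius = maximal_inscribed_circle(mask)
    rows, cols = np.indices(mask.shape)
    return (rows - r0) ** 2 + (cols - c0) ** 2 <= radius ** 2
```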
The lurking epidemic of eye diseases caused by diabetes and aging will put more than 130 million Americans at risk of blindness by 2020. Screening has been touted as a means to prevent blindness by identifying those individuals at risk. However, the cost of most of today's commercial retinal imaging devices makes their use economically impractical for mass screening; thus, low-cost devices are needed. In such devices, low cost often comes at the expense of image quality, with high levels of noise and distortion hindering clinical evaluation of the imaged retinas.
We introduce a software-based super-resolution (SR) reconstruction methodology that produces images with improved resolution and quality from multiple low-resolution (LR) observations. The LR images are taken with a low-cost scanning laser ophthalmoscope (SLO). The non-redundant information in these LR images is combined to produce a single image, in an implementation that also removes noise and imaging distortions while preserving fine blood vessels and small lesions.
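The reconstruction algorithm is not detailed in this summary; as a simplified illustration of the multi-frame SR idea only, the sketch below registers LR frames to a reference with subpixel accuracy and fuses them by shift-and-add with a median to suppress noise. The actual method presumably handles distortion correction and regularization more carefully.

```python
# Simplified shift-and-add illustration of multi-frame super resolution.
# The actual reconstruction (including distortion removal) is more involved;
# this sketch only shows the registration-and-fusion idea.
import numpy as np
from scipy.ndimage import shift as subpixel_shift, zoom
from skimage.registration import phase_cross_correlation


def shift_and_add_sr(frames, upsample=2):
    """Fuse a list of 2-D low-resolution frames into one higher-resolution image."""
    reference = frames[0]
    aligned = []
    for frame in frames:
        # Subpixel shift of this frame relative to the reference frame.
        offset, _, _ = phase_cross_correlation(reference, frame, upsample_factor=10)
        # Upsample, then shift into alignment (offset scaled by the zoom factor).
        hi = zoom(frame, upsample, order=3)
        aligned.append(subpixel_shift(hi, np.asarray(offset) * upsample, order=3))
    # Median across aligned frames keeps shared structure and rejects outliers/noise.
    return np.median(np.stack(aligned), axis=0)
```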
The feasibility of using the resulting SR images for screening of eye diseases was tested using quantitative and qualitative assessments. Qualitatively, expert image readers evaluated their ability to detect clinically significant features in the SR images and compared their findings with those obtained from matched images of the same eyes taken with commercially available high-end cameras. Quantitatively, measures of image quality were calculated from the SR images and compared with those from subject-matched images acquired with a commercial fundus imager. Our results show that the SR images indeed have sufficient quality and spatial detail for screening purposes.
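The specific image-quality measures are not named here; as an assumed, generic example, the sketch below computes two common no-reference sharpness scores (variance of the Laplacian and the Tenengrad gradient-energy measure) of the kind that could be compared between SR images and fundus-camera images.

```python
# Example (assumed, not necessarily the measures used in the study) of simple
# no-reference image quality scores for comparing SR output against images
# from a commercial fundus imager.
import numpy as np
from scipy.ndimage import laplace, sobel


def variance_of_laplacian(image: np.ndarray) -> float:
    """Higher values generally indicate a sharper image."""
    return float(laplace(image.astype(float)).var())


def tenengrad(image: np.ndarray) -> float:
    """Mean gradient energy (Tenengrad focus measure)."""
    img = image.astype(float)
    gx, gy = sobel(img, axis=0), sobel(img, axis=1)
    return float(np.mean(gx ** 2 + gy ** 2))
```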
Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The methodology comprises two steps. First, the OD location is identified using template matching and a directional matched filter; to reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is then estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed: based on the detected optic disc location, a fast hybrid level-set algorithm that combines region information and edge gradient to drive the curve evolution is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images from the Messidor database, comprising 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD localization methodology achieved a 98.3% success rate, and fovea localization achieved a 95% success rate. The average mean absolute distance (MAD) between the OD boundary produced by the segmentation algorithm and the "gold standard" is 10.5% of the estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.
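As a minimal illustration of the template-matching component of OD localization only (omitting the directional matched filter, the vessel-based false-positive reduction, and the level-set segmentation described above), the sketch below correlates a bright circular template with the image and takes the peak response; the template radius and channel choice are illustrative assumptions.

```python
# Minimal illustration of template matching for optic disc localization.
# The full method described above additionally uses a directional matched
# filter, vessel characteristics, and a hybrid level set; those steps are
# omitted here. The template radius is an illustrative assumption.
import numpy as np
from skimage.feature import match_template


def bright_disc_template(radius: int) -> np.ndarray:
    """Simple bright-disc template of the approximate OD size."""
    size = 2 * radius + 1
    rows, cols = np.indices((size, size)) - radius
    return (rows ** 2 + cols ** 2 <= radius ** 2).astype(float)


def locate_optic_disc(green_channel: np.ndarray, radius: int = 40):
    """Return the (row, col) of the peak normalized cross-correlation."""
    response = match_template(green_channel.astype(float),
                              bright_disc_template(radius), pad_input=True)
    return np.unravel_index(np.argmax(response), response.shape)
```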