Here, we explore the potential benefits of extracting hemoglobin oxygenation metrics using multispectral imaging (MSI) in nailfold capillaroscopy for systemic sclerosis (SSc) patients. We used a nine-band multispectral camera to capture images of the nail bed from SSc patients (n=10) and healthy controls (n=12). Spectral analysis and machine learning classification were employed to examine systematic differences between healthy controls and SSc patients. The results demonstrate differences in spectra and promising classification accuracy, with further work needed to extract oxygenation values and improve the signal-to-noise ratio. MSI shows potential for improving the sensitivity of nailfold capillaroscopy and the detection of changes in early disease.
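As an illustration of the classification step described above, the sketch below trains a simple classifier to separate SSc patients from healthy controls using one mean nine-band spectrum per subject, with leave-one-out cross-validation given the small cohort. The data layout and choice of classifier are assumptions for illustration; the abstract does not specify the features or model used.

```python
# Minimal sketch: classifying SSc patients vs healthy controls from mean
# nine-band nailfold reflectance spectra (hypothetical data layout; the
# study's actual features and classifier are not specified here).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((22, 9))             # one mean spectrum (9 bands) per subject
y = np.array([1] * 10 + [0] * 12)   # 1 = SSc patient, 0 = healthy control

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy: {acc:.2f}")
```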
Significance: The capillaries are the smallest blood vessels in the body, typically imaged using video capillaroscopy to aid diagnosis of connective tissue diseases, such as systemic sclerosis. Video capillaroscopy allows visualization of morphological changes in the nailfold capillaries but does not provide any physiological information about the blood contained within the capillary network. Extracting parameters such as hemoglobin oxygenation could increase sensitivity for diagnosis and measurement of microvascular disease progression.
Aim: To design, construct, and test a low-cost multispectral imaging (MSI) system using light-emitting diode (LED) illumination to assess relative hemoglobin oxygenation in the nailfold capillaries.
Approach: An LED ring light was first designed and modeled. The ring light was fabricated using four commercially available LED colors and a custom-designed printed circuit board. The experimental system was characterized and results compared with the illumination model. A blood phantom with variable oxygenation was used to determine the feasibility of using the illumination-based MSI system for oximetry. Nailfold capillaries were then imaged in a healthy subject.
Results: The illumination modeling results were in close agreement with the constructed system. Imaging of the blood phantom demonstrated sensitivity to changing hemoglobin oxygenation, which was in line with the spectral modeling of reflection. The morphological properties of the volunteer capillaries were comparable to those measured in current gold standard systems.
Conclusions: LED-based illumination could be used as a low-cost approach to enable MSI of the nailfold capillaries to provide insight into the oxygenation of the blood contained within the capillary network.
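To make the oximetry idea concrete, the sketch below estimates a relative oxygenation value from reflectance measured at a few LED wavelengths using a modified Beer-Lambert, least-squares unmixing model. The wavelengths and extinction coefficients are placeholder values chosen for illustration, not those used in the study.

```python
# Illustrative sketch of estimating relative haemoglobin oxygenation from
# multi-wavelength reflectance via least-squares unmixing under a modified
# Beer-Lambert model. Extinction coefficients are placeholders, not the
# calibrated values from the paper.
import numpy as np

wavelengths = np.array([470, 530, 590, 625])   # nm, one per LED colour (assumed)
eps_hbo2 = np.array([33.2, 39.0, 15.0, 0.7])   # placeholder extinction, HbO2
eps_hb   = np.array([16.2, 39.0, 40.0, 5.4])   # placeholder extinction, Hb

def relative_so2(reflectance, baseline):
    """Fit attenuation = eps_HbO2*c1 + eps_Hb*c2 and return c1/(c1+c2)."""
    attenuation = -np.log(reflectance / baseline)   # modified Beer-Lambert
    A = np.column_stack([eps_hbo2, eps_hb])
    c, *_ = np.linalg.lstsq(A, attenuation, rcond=None)
    c = np.clip(c, 0, None)                         # concentrations are non-negative
    return c[0] / (c[0] + c[1] + 1e-12)

# Example: reflectance at the four LED bands relative to a white reference
print(relative_so2(np.array([0.42, 0.35, 0.50, 0.80]),
                   np.array([0.90, 0.90, 0.90, 0.90])))
```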
Nailfold capillaroscopy is a technique for imaging the capillary bed in the finger nailfold, which is used in the diagnosis of scleroderma. Knowledge of the capillary oxygenation profile would be advantageous in disease evaluation due to the suspected involvement of hypoxia in causing fibrosis. A hyperspectral nailfold capillaroscopy system with narrowband illumination provided by a supercontinuum laser and acousto-optic tuneable filter was created to enable spectral analysis of the nailfold. Hyperspectral imaging (HSI) data from 500-595 nm was analysed to extract image quality metrics, which suggested HSI oximetry holds promise for understanding how rheumatic diseases affect oxygenation of the nailfold.
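The specific image quality metrics used in that analysis are not reproduced here, but the sketch below shows one simple per-band measure, a vessel-to-background contrast-to-noise ratio computed across a hyperspectral stack, assuming hypothetical vessel and background masks.

```python
# Sketch of a per-band image quality measure for a hyperspectral stack
# (bands x height x width): vessel-to-background contrast-to-noise ratio.
# The masks and data here are synthetic stand-ins for illustration only.
import numpy as np

def band_cnr(stack, vessel_mask, background_mask):
    """Contrast-to-noise ratio for each spectral band."""
    cnr = []
    for band in stack:
        vessel = band[vessel_mask]
        background = band[background_mask]
        cnr.append(abs(vessel.mean() - background.mean()) / (background.std() + 1e-12))
    return np.array(cnr)

rng = np.random.default_rng(1)
stack = rng.normal(0.6, 0.05, (20, 64, 64))      # 20 bands of a 64x64 image
vessel_mask = np.zeros((64, 64), bool)
vessel_mask[30:34, :] = True
stack[:, vessel_mask] -= 0.2                     # darker "vessel" stripe
print(band_cnr(stack, vessel_mask, ~vessel_mask))
```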
Nailfold capillaroscopy is a technique for imaging the capillary bed in the finger nailfold, which is used in the diagnosis of scleroderma. Knowledge of the capillary oxygenation profile would be a substantial advantage in disease evaluation. A compact, low-cost LED-illuminated capillaroscopy system was designed around inexpensive parts and optical hardware. The system uses a compact Raspberry Pi to control a custom-designed LED ring light, in which white-light LEDs are interleaved with three narrowband LEDs, together with a Raspberry Pi camera. Capillary visualisation and distinction of haemoglobin contrast are demonstrated, suggesting future promise for the application of multispectral nailfold capillaroscopy in low-resource settings.
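A minimal control loop for such a system might look like the sketch below, which drives one LED colour at a time from Raspberry Pi GPIO pins and captures a frame per colour with the Pi camera. The pin numbers, colour names, and camera configuration are assumptions, not the paper's wiring, and the code requires a Raspberry Pi with the gpiozero and picamera2 libraries installed.

```python
# Illustrative Raspberry Pi control loop: switch on one LED colour of the ring
# light at a time and capture a frame with the Pi camera. Pin numbers and
# colour names are hypothetical and depend on the ring-light wiring.
from gpiozero import LED
from picamera2 import Picamera2

LED_PINS = {"white": 17, "green": 27, "amber": 22, "red": 23}  # example BCM pins

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()

frames = {}
for colour, pin in LED_PINS.items():
    led = LED(pin)
    led.on()                          # illuminate with a single LED colour
    frames[colour] = picam2.capture_array()
    led.off()

picam2.stop()
print({c: f.shape for c, f in frames.items()})
```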
Mammographic density is an important risk factor for breast cancer. In recent research, percentage density assessed visually using visual analogue scales (VAS) showed stronger risk prediction than existing automated density measures, suggesting readers may recognize relevant image features not yet captured by hand-crafted algorithms. With deep learning, it may be possible to encapsulate this knowledge in an automatic method. We have built convolutional neural networks (CNN) to predict density VAS scores from full-field digital mammograms. The CNNs are trained using whole-image mammograms, each labeled with the average VAS score of two independent readers. Each CNN learns a mapping between mammographic appearance and VAS score so that at test time, they can predict VAS score for an unseen image. Networks were trained using 67,520 mammographic images from 16,968 women and for model selection we used a dataset of 73,128 images. Two case-control sets of contralateral mammograms of screen-detected cancers and prior images of women with cancers detected subsequently, matched to controls on age, menopausal status, parity, HRT and BMI, were used for evaluating performance on breast cancer prediction. In the case-control sets, odds ratios of cancer in the highest versus lowest quintile of percentage density were 2.49 (95% CI: 1.59 to 3.96) for screen-detected cancers and 4.16 (2.53 to 6.82) for priors, with matched concordance indices of 0.587 (0.542 to 0.627) and 0.616 (0.578 to 0.655), respectively. There was no significant difference between reader VAS and predicted VAS for the prior test set (likelihood ratio chi square, p = 0.134). Our fully automated method shows promising results for cancer risk prediction and is comparable with human performance.
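The abstract does not describe the network architecture, but the core idea of regressing a scalar VAS score from a whole-image mammogram can be sketched as below. The architecture, image size, and training loop are illustrative assumptions only.

```python
# Minimal PyTorch sketch of a CNN regressing a scalar VAS density score from a
# single-channel mammogram. Architecture and sizes are illustrative; the
# abstract does not specify the network used in the study.
import torch
import torch.nn as nn

class VASRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)      # single output: predicted VAS (0-100)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = VASRegressor()
images = torch.randn(4, 1, 256, 256)      # stand-in mammogram batch
targets = torch.rand(4, 1) * 100          # average of two readers' VAS scores
loss = nn.functional.mse_loss(model(images), targets)
loss.backward()
print(f"example MSE loss: {loss.item():.1f}")
```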
Background: Mammographic density is an important risk factor for breast cancer. Recent research demonstrated that percentage density assessed visually using Visual Analogue Scales (VAS) showed stronger risk prediction than existing automated density measures, suggesting readers may recognise relevant image features not yet captured by automated methods.
Method: We have built convolutional neural networks (CNN) to predict VAS scores from full-field digital mammograms. The CNNs are trained using whole-image mammograms, each labelled with the average VAS score of two independent readers. They learn a mapping between mammographic appearance and VAS score so that at test time, they can predict VAS score for an unseen image. Networks were trained using 67,520 mammographic images from 16,968 women, and tested on a large dataset of 73,128 images and on case-control sets of contralateral mammograms of screen-detected cancers and prior images of women with cancers detected subsequently, matched to controls on age, menopausal status, parity, HRT and BMI.
Results: Pearson's correlation coefficient between readers' and predicted VAS in the large dataset was 0.79 per mammogram and 0.83 per woman (averaging over all views). In the case-control sets, odds ratios of cancer in the highest vs lowest quintile of percentage density were 3.07 (95%CI: 1.97 - 4.77) for the screen detected cancers and 3.52 (2.22 - 5.58) for the priors, with matched concordance indices of 0.59 (0.55 - 0.64) and 0.61 (0.58 - 0.65) respectively.
Conclusion: Our fully automated method demonstrated encouraging results which compare well with existing methods, including VAS.
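For readers unfamiliar with the statistics quoted in the Results, the sketch below shows how an odds ratio of cancer in the highest versus lowest quintile of predicted density, with a 95% confidence interval from the standard log-odds-ratio method, can be computed. The case and control counts are hypothetical.

```python
# Worked example of the odds-ratio calculation reported in the Results:
# odds of cancer in the highest vs lowest quintile of percentage density,
# with a 95% CI from the log-odds-ratio (Woolf) method. Counts are hypothetical.
import math

cases_hi, controls_hi = 60, 40     # hypothetical highest-quintile counts
cases_lo, controls_lo = 25, 75     # hypothetical lowest-quintile counts

odds_ratio = (cases_hi * controls_lo) / (cases_lo * controls_hi)
se_log_or = math.sqrt(1/cases_hi + 1/controls_hi + 1/cases_lo + 1/controls_lo)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI: {ci_low:.2f} to {ci_high:.2f})")
```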
A recent study has shown that breast cancer risk can be reduced by taking Tamoxifen, but only if this results in a reduction in mammographic density of at least 10 percentage points. When mammographic density is quantified visually, it is impossible to assess reader accuracy using clinical images, as the ground truth is unknown. Our aim was to compare three models of assessing density change and to determine reader accuracy in identifying reductions of 10 percentage points or more. We created 100 synthetic, mammogram-like images comprising 50 pairs designed to simulate natural reduction in density within each pair. Model I: individual images were presented to readers and density assessed. Model II: pairs of images were displayed together, with readers assessing density for each image. Model III: pairs of images were displayed together, and readers were asked whether there was at least a 10 percentage point reduction in density. Ten expert readers participated. Readers' estimates of percentage density were significantly closer to the truth (6.8%-26.4%) when images were assessed individually rather than in pairs (9.6%-29.8%). Measurement of change was significantly more accurate in Model II than Model I (p<0.005). When detecting density changes of at least 10 percentage points in image pairs, mean accuracy was significantly lower (p<0.005) when change was calculated from density assessments (58%-88%) than in Model III (74%-92%). Our results suggest that where readers need to identify change in density, images should be displayed alongside one another. In our study, less accurate assessors performed better when asked directly about the magnitude of the change.
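The comparison between deriving change from per-image assessments and asking about change directly can be illustrated with a small calculation like the one below, which classifies each pair as a reduction of at least 10 percentage points and scores the result against ground truth. The reader estimates shown are hypothetical.

```python
# Sketch of how detection accuracy for a >=10 percentage point reduction can be
# derived from per-image density assessments (as in Models I/II) and compared
# with ground truth. Reader estimates are hypothetical.
import numpy as np

true_before = np.array([40, 55, 30, 65, 20], float)
true_after  = np.array([25, 50, 22, 50, 18], float)
est_before  = np.array([45, 50, 35, 60, 25], float)   # hypothetical reader scores
est_after   = np.array([28, 47, 30, 48, 20], float)

true_reduction = (true_before - true_after) >= 10
est_reduction  = (est_before - est_after) >= 10
accuracy = (true_reduction == est_reduction).mean()
print(f"detection accuracy from assessed densities: {accuracy:.0%}")
```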
We address the problem of evaluating the performance of algorithms for detecting curvilinear structures in medical images. As an exemplar we consider the detection of vessel trees, which contain structures of variable width and contrast. Results for the conventional approach to evaluation, in which the detector output is compared directly with a ground-truth mask, tend to be dominated by the detection of large vessels and fail to capture adequately whether or not finer, lower-contrast vessels have been detected successfully. We propose and investigate three alternative evaluation strategies. We demonstrate the use of the standard and new evaluation strategies to assess the performance of a novel method for detecting vessels in retinograms, using the publicly available DRIVE database.
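The alternative strategies proposed in the paper are not reproduced here, but the conventional pixel-wise evaluation being criticised can be sketched as below: detector scores are compared directly against a binary ground-truth vessel mask using ROC analysis. The mask and detector output are synthetic stand-ins at the DRIVE image resolution.

```python
# Sketch of the conventional pixel-wise evaluation: detector scores compared
# directly with a binary ground-truth vessel mask via ROC analysis. Data are
# synthetic stand-ins; no real DRIVE images are loaded here.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
ground_truth = rng.random((584, 565)) < 0.1          # stand-in vessel mask
detector_output = np.where(ground_truth,             # stand-in probability map
                           rng.normal(0.7, 0.2, ground_truth.shape),
                           rng.normal(0.3, 0.2, ground_truth.shape))

auc = roc_auc_score(ground_truth.ravel(), detector_output.ravel())
print(f"pixel-wise ROC AUC: {auc:.3f}")
# This measure is dominated by the many pixels belonging to large, high-contrast
# vessels, which is the weakness the alternative strategies aim to address.
```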
The quantity and appearance of dense breast tissue in mammograms is related to the risk of developing breast cancer, the sensitivity of mammographic interpretation, and the likelihood of local recurrence of cancer following surgery. Visual assessment of breast density is widely used, often with readers indicating the percentage of dense tissue in a mammogram. Although real mammograms can be used to investigate intra- and inter-observer variability, ground truth is difficult to ascertain, so to investigate reader accuracy we created 60 synthetic, mammogram-like images with densities comparable in area to those found in screening. The images contained either a single dense area, multiple or linear densities, or a variable breast size with a single density. The images were randomized and assessed by 9 expert and 6 non-expert readers who marked percentage area of density on a visual analogue scale. Non-expert readers' estimates of percentage area of density were closer to the truth (6-11% mean absolute difference) than the experts' estimates (10-19%). The readers were most accurate when the density formed a single area in the image, and least accurate when the dense area was composed of linear structures. In almost every case, the dense area was overestimated by the expert readers. When experts were ranked according to the degree of overestimation, this broadly reflected their relative performance on real mammograms.
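The accuracy measure used above, the mean absolute difference between a reader's estimates and the known density of the synthetic images, can be computed as in the small sketch below; the estimates shown are hypothetical.

```python
# Sketch of the accuracy measure implied above: mean absolute difference between
# each reader's VAS density estimates and the known density of the synthetic
# images. The estimates are hypothetical.
import numpy as np

true_density = np.array([10, 20, 30, 40, 50], float)        # % dense area
reader_estimates = {
    "non_expert_1": np.array([12, 22, 28, 45, 53], float),
    "expert_1":     np.array([18, 32, 41, 55, 66], float),   # tends to overestimate
}
for reader, est in reader_estimates.items():
    mad = np.abs(est - true_density).mean()
    print(f"{reader}: mean absolute difference = {mad:.1f}%")
```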
Jenny Diffey, Michael Berks, Alan Hufton, Camilla Chung, Rosanne Verow, Joanna Morrison, Mary Wilson, Caroline Boggis, Julie Morris, Anthony Maxwell, Susan Astley
Breast density is positively linked to the risk of developing breast cancer. We have developed a semi-automated, stepwedge-based method that has been applied to the mammograms of 1,289 women in the UK breast screening programme to measure breast density by volume and area. 116 images were analysed by three independent operators to assess inter-observer variability; 24 of these were analysed on 10 separate occasions by the same operator to determine intra-observer variability. 168 separate images were analysed using the stepwedge method and by two radiologists who independently estimated percentage breast density by area. There was little intra-observer variability in the stepwedge method (average coefficients of variation 3.49%-5.73%). There were significant differences in the volumes of glandular tissue obtained by the three operators, attributed to variations in the operators' definition of the breast edge. For fatty and dense breasts, there was good correlation between breast density assessed by the stepwedge method and by the radiologists; correlation between the radiologists was also good, despite significant inter-observer variation. Based on analysis of the thresholds used in the stepwedge method, radiologists appear to define a dense pixel as one in which glandular tissue makes up between 10% and 20% of the total tissue thickness.
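The relationship between a per-pixel glandular fraction and an area-based density score can be illustrated as below: given a calibrated map of glandular thickness as a fraction of total tissue thickness, the percentage dense area follows from a "dense pixel" threshold in the 10-20% range. The map here is synthetic; the stepwedge calibration itself is not reproduced.

```python
# Sketch relating a calibrated glandular-fraction map to percentage dense area
# for different "dense pixel" thresholds. The map is a synthetic stand-in.
import numpy as np

rng = np.random.default_rng(3)
glandular_fraction = rng.beta(2, 8, (1000, 800))   # stand-in per-pixel fraction
breast_mask = np.ones_like(glandular_fraction, bool)

for threshold in (0.10, 0.15, 0.20):               # 10-20% of total thickness
    dense = glandular_fraction >= threshold
    percent_dense_area = 100 * dense[breast_mask].mean()
    print(f"threshold {threshold:.0%}: dense area = {percent_dense_area:.1f}%")
```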
A method has been developed for generating synthetic masses that exhibit the appearance of real breast cancers in mammograms. To be clinically useful, the synthetic masses must appear sufficiently realistic, even to expert mammogram readers. This paper presents the results of an observer study in which 10 expert mammogram readers at the Nightingale Centre, Manchester, attempted to distinguish between real and synthetically generated masses. Each reader rated a set of 30 real and 30 synthetic masses on a scale ranging from "definitely real" to "definitely synthetic". ROC curves were fitted to their responses and the area under the curve (AUC) used to quantify the ability of a reader to identify synthetic masses. The mean AUC was 0.70±0.09, showing the readers were able to identify synthetic masses at a rate statistically better than chance and suggesting that further improvements must be made to the mass synthesis method. Analysis of individual AUC scores showed reader performance was not affected by job type (radiologist versus breast physician/radiographer) or experience.
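The per-reader ROC analysis described above can be sketched as follows: each of the 60 masses receives an ordinal rating from "definitely real" to "definitely synthetic", and the AUC measures how well those ratings separate the two classes. The ratings generated here are hypothetical.

```python
# Sketch of the per-reader ROC analysis: ordinal ratings of 30 real and 30
# synthetic masses are scored with an AUC (0.5 = chance discrimination).
# The ratings are simulated, not real observer data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
is_synthetic = np.array([0] * 30 + [1] * 30)
# 1-5 scale (5 = "definitely synthetic"); a reader partially able to identify
# synthetic masses rates them slightly higher on average.
ratings = np.clip(np.round(rng.normal(3 + 0.8 * is_synthetic, 1.0)), 1, 5)

auc = roc_auc_score(is_synthetic, ratings)
print(f"reader AUC: {auc:.2f}")
```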