Multimodal computed tomography (CT) scans, including non-contrast CT (NCCT), CT perfusion (CTP), and CT angiography (CTA), are widely used in acute stroke diagnosis and treatment planning. Each imaging modality serves a different visualization purpose, such as depicting anatomical structures or functional information, and the resulting image quality varies across modalities. In this work, we aim to enhance image quality for all modalities using deep learning. Through our experiments, we demonstrate that, by using transfer learning and a generative adversarial network, NCCT images are beneficial for CTP image reconstruction, and CTP images are helpful for CTA image quality enhancement.
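As an illustration of the adversarial training and transfer-learning idea described above, the following minimal sketch (PyTorch) shows a small residual generator and patch discriminator for CT image enhancement, with the generator trained on one modality (e.g., NCCT) used to initialize training on another (e.g., CTP). This is not the authors' implementation: the network sizes, loss weights, and the weight-file name are illustrative assumptions.

```python
# Minimal sketch of GAN-based CT enhancement with transfer learning.
# Architecture, hyperparameters, and file names are illustrative only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):            # residual mapping: predict the noise/artifact
        return x + self.net(x)

class Discriminator(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(2 * ch, 1, 4, padding=1),   # patch-wise real/fake logits
        )

    def forward(self, x):
        return self.net(x)

def train_step(G, D, g_opt, d_opt, low_q, high_q, adv_weight=0.01):
    """One adversarial + L1 training step on a (low-quality, high-quality) pair."""
    bce = nn.BCEWithLogitsLoss()
    # Discriminator update
    d_opt.zero_grad()
    real_logits = D(high_q)
    fake_logits = D(G(low_q).detach())
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    d_loss.backward(); d_opt.step()
    # Generator update: L1 fidelity plus adversarial term
    g_opt.zero_grad()
    fake = G(low_q)
    fake_logits = D(fake)
    g_loss = nn.functional.l1_loss(fake, high_q) + \
             adv_weight * bce(fake_logits, torch.ones_like(fake_logits))
    g_loss.backward(); g_opt.step()
    return float(d_loss), float(g_loss)

# Transfer learning: initialize the CTP generator from NCCT-trained weights
# ("ncct_generator.pt" is a placeholder path), then fine-tune on CTP pairs.
G, D = Generator(), Discriminator()
# G.load_state_dict(torch.load("ncct_generator.pt"))
g_opt = torch.optim.Adam(G.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(D.parameters(), lr=1e-4)
low_q, high_q = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
print(train_step(G, D, g_opt, d_opt, low_q, high_q))
```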
Near-infrared (NIR) spectroscopic imaging of wounds has been performed by past researchers to obtain tissue oxygenation at discrete point locations. We have developed a near-infrared optical scanner (NIROS) that performs noncontact NIR spectroscopic (NIRS) imaging to provide 2D tissue oxygenation maps of entire wounds. Regions of changed oxygenation have to be demarcated and registered with respect to visual white light images of the wound. Herein, a semi-automatic image segmentation and co-registration approach using machine learning has been developed to differentiate regions of changed tissue oxygenation. A registration technique was applied using a transformation matrix approach based on specific markers across the white light image and the NIR images (or tissue oxygenation maps). This allowed physiological changes observed from hemodynamics to be visualized in the RGB white light image as well. Semi-automated segmentation techniques employing graph cuts algorithms were implemented to demarcate the 2D tissue oxygenation maps into regions of increased or decreased oxygenation, which were further co-registered onto the white light images. The developed registration technique was validated via phantom studies (both flat and curved phantoms) and in-vivo studies on controls, demonstrating an accuracy >97%. The technique was further implemented on wounds (here, diabetic foot ulcers) across weeks of treatment. Regions of decreased oxygenation were demarcated, their area estimated, and co-registered in comparison to the clinically demarcated wound area. Future work involves the development of automated machine learning approaches for image analysis so that clinicians can obtain real-time co-registered clinical and subclinical assessments of the wound.
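The following is a minimal sketch (NumPy/scikit-image) of the marker-based co-registration idea: estimate an affine transformation matrix from corresponding marker coordinates picked in the white light image and the NIR tissue oxygenation map, then warp the oxygenation map into the white light frame for overlay. The marker coordinates, image sizes, and threshold are illustrative assumptions, not values from the study, and the graph cuts segmentation step is not shown here.

```python
# Marker-based affine co-registration sketch; all coordinates are illustrative.
import numpy as np
from skimage import transform

# (row, col) positions of the same physical fiducial markers in both images
markers_white = np.array([[120, 140], [118, 420], [380, 150], [385, 410]], float)
markers_nir   = np.array([[ 60,  70], [ 58, 210], [190,  75], [192, 205]], float)

# Estimate the affine transform mapping NIR coordinates -> white light coordinates
# (estimate_transform expects (x, y), hence the column/row swap)
tform = transform.estimate_transform("affine", src=markers_nir[:, ::-1],
                                     dst=markers_white[:, ::-1])

def coregister(oxygenation_map, white_light_shape):
    """Warp a 2D tissue-oxygenation map into the white light image frame."""
    return transform.warp(oxygenation_map, inverse_map=tform.inverse,
                          output_shape=white_light_shape, preserve_range=True)

# Example: overlay regions of decreased oxygenation on the white light image
oxy = np.random.rand(256, 256)                 # stand-in oxygenation map
oxy_in_white_frame = coregister(oxy, (512, 512))
decreased = oxy_in_white_frame < 0.4           # illustrative threshold
```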
Lower extremity ulcers are one of the most common complications that not only affect many people around the world but also have a huge economic impact, since a large amount of resources is spent on treatment and prevention of these diseases. Clinical studies have shown that a 40% reduction in wound size within 4 weeks is an acceptable progress in the healing process. Quantification of the wound size plays a crucial role in assessing the extent of healing and determining the treatment process. To date, wound healing is visually inspected and the wound size is measured from surface images; however, the extent of wound healing internally may vary from the surface. A near-infrared (NIR) optical imaging approach has been developed for non-contact imaging of wounds internally and for differentiating healing from non-healing wounds. Herein, quantitative wound size measurements from NIR and white light images are estimated using graph cuts and region growing image segmentation algorithms. The extent of wound healing from NIR imaging of lower extremity ulcers in diabetic subjects is quantified and compared across NIR and white light images. NIR imaging and wound size measurements can play a significant role in potentially predicting the extent of internal healing, thus allowing better treatment plans when implemented for periodic imaging in the future.
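As a rough illustration of the wound-size quantification step, the sketch below (scikit-image) grows a region from a seed placed inside the wound and converts the pixel count to physical area. The flood-fill region growing here is a simple stand-in for the graph cuts and region growing segmentation described above, and the seed point, tolerance, and pixel spacing are illustrative assumptions.

```python
# Region-growing wound segmentation and area estimation sketch.
import numpy as np
from skimage import segmentation

def wound_area_cm2(image, seed, tolerance, pixel_size_mm):
    """Segment by flood-fill region growing and return (mask, area in cm^2)."""
    mask = segmentation.flood(image, seed_point=seed, tolerance=tolerance)
    area_mm2 = mask.sum() * pixel_size_mm ** 2
    return mask, area_mm2 / 100.0

# Example on a synthetic NIR intensity image with a darker "wound" region
img = np.ones((200, 200)) * 0.8
img[80:130, 60:120] = 0.3
mask, area = wound_area_cm2(img, seed=(100, 90), tolerance=0.1, pixel_size_mm=0.5)
print(f"Estimated wound area: {area:.1f} cm^2")  # compare NIR vs. white light estimates
```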
In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, whose cine scanning technique leads to a higher radiation dose. A simple and cost-effective way to perform these examinations is to set the milliampere-seconds (mAs) parameter as low as reasonably achievable during data acquisition. However, lowering the mAs parameter unavoidably increases data noise and greatly degrades CT perfusion maps if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method is used that models the residual function with a piecewise parametric form; the model parameters are then estimated from a Bayesian formulation with prior smoothness constraints on the perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low-dose CT data. The merit of this scheme lies in combining an analytical piecewise residual function with a Bayesian framework that uses a simple spatial prior constraint for CT perfusion. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded a 78% increase in signal-to-noise ratio (SNR) and a 40% decrease in mean squared error (MSE) at a low-dose setting of 43 mA.
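The sketch below (NumPy/SciPy) illustrates the general modelling idea in simplified form: the tissue time-attenuation curve is modelled as a flow scale times the arterial input function convolved with a parametric residual function, and each voxel's parameters are fitted with a quadratic penalty pulling them toward a neighborhood estimate, a crude stand-in for the Bayesian spatial smoothness prior. The smoothed residual function, synthetic curves, weights, and parameter bounds are illustrative assumptions, not the authors' piecewise model or values.

```python
# Simplified voxel-wise perfusion fit with a quadratic "prior" penalty.
import numpy as np
from scipy.optimize import least_squares

t = np.arange(0, 50, 1.0)                     # seconds, one frame per second
aif = np.exp(-((t - 10.0) / 4.0) ** 2)        # synthetic arterial input function

def residual_function(t, delay, mtt):
    """Smooth stand-in residual: sigmoid onset at `delay`, exponential decay of scale `mtt`."""
    onset = 1.0 / (1.0 + np.exp(-(t - delay) / 0.5))
    return onset * np.exp(-np.maximum(t - delay, 0.0) / mtt)

def tissue_curve(params):
    cbf, delay, mtt = params
    r = residual_function(t, delay, mtt)
    return cbf * np.convolve(aif, r)[: len(t)]   # discrete convolution model

def fit_voxel(noisy_curve, prior_params, prior_weight=0.5):
    """Fit (CBF scale, delay, MTT) with a quadratic penalty toward the neighborhood prior."""
    def cost(params):
        data_term = tissue_curve(params) - noisy_curve
        prior_term = np.sqrt(prior_weight) * (np.asarray(params) - prior_params)
        return np.concatenate([data_term, prior_term])
    return least_squares(cost, x0=prior_params, bounds=([0, 0, 1], [5, 20, 20])).x

# Simulate a low-dose (noisy) voxel curve and recover its perfusion parameters
true_params = np.array([1.2, 4.0, 6.0])       # CBF scale, delay (s), MTT (s)
noisy = tissue_curve(true_params) + np.random.normal(0, 0.2, len(t))
print(fit_voxel(noisy, prior_params=np.array([1.0, 5.0, 5.0])))
```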