This PDF file contains the front matter associated with SPIE Proceedings Volume 12038, including the Title Page, Copyright information, Table of Contents, and Conference Committee listings.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Image-Guided Ultrasound Interventions: Joint Session with Conferences 12034 and 12038
High dose rate brachytherapy is a common procedure used in the treatment of gynecological cancers to irradiate malignant tumors while sparing the surrounding healthy tissue. While treatment may be delivered using a variety of applicator types, a hybrid technique consisting of an intracavitary applicator and interstitial needles allows for highly localized placement of the radioactive sources. To ensure an accurate and precise procedure, identification of the applicator and needle tips is necessary. The use of three-dimensional (3D) transrectal ultrasound (TRUS) and transabdominal ultrasound (TAUS) imaging has been previously investigated for the visualization of the intracavitary applicators. However, due to image artifacts from the applicator, needle tip identification is severely restricted when using a single 3D US view. To overcome this limitation and improve treatment outcome, we propose the use of image fusion to combine TRUS and TAUS images for the complete visualization of the applicator, needle tips, and surrounding anatomy. In this proof-of-concept work, we use a multimodality anthropomorphic pelvic phantom to assess the feasibility of image fusion and needle visualization using a hybrid brachytherapy applicator. We found that fused 3D US images resulted in accurate visualization of the pertinent structures when compared with magnetic resonance images. The results of this study demonstrate the future potential of image fusion in gynecological brachytherapy applications to ensure high treatment quality and reduce radiation dose to surrounding healthy tissue. This work is currently being expanded to other applicator types and is being applied to patients in a clinical trial.
Early detection of breast cancer has reduced mortality in women through the widespread implementation of screening mammography. However, challenges remain for the 40% of women with dense breasts, in whom mammographic sensitivity is reduced and almost one-third of breast cancers go undetected. Automated breast (AB) ultrasound (US) has been proposed for screening women with dense breasts, enabling three-dimensional (3D) visualization, improved reproducibility, and reduced operator dependence compared to handheld US. However, ABUS systems require operator training for high-quality image acquisition and experienced interpretation, and are costly. We propose an alternative, adaptable, and cost-effective spatially tracked 3DUS system for automated whole-breast imaging. This paper describes the system design, optimization of 3D spatial tracking, multi-image registration and fusion of acquired 3DUS images in a tissue-mimicking breast phantom, and the first proof-of-concept healthy volunteer study. In the tissue-mimicking breast phantom, whole-breast 3DUS imaging and multi-planar visualization in axial, sagittal, and coronal views were demonstrated through multi-image registration and fusion of the acquired spatially tracked 3DUS images. The first clinical use of the spatially tracked 3DUS system was demonstrated in a healthy male and female volunteer study, showing high-resolution multi-image registration and fusion of two acquired 3DUS images. With an optimized acquisition protocol, our proposed spatially tracked system shows potential utility for automated whole-breast 3DUS imaging as a bedside point-of-care (POC) approach, toward improving widespread, accessible imaging in women with dense breasts.
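The central geometric step in spatially tracked 3DUS is mapping each acquired sweep into a common tracker frame before the volumes are registered and fused. A minimal sketch of that step, assuming the tracker reports a 4x4 homogeneous pose matrix; the function name and pose values are illustrative, not the authors' implementation:

```python
import numpy as np

def transform_points(pose, points):
    """Map Nx3 points from image space into the common (tracker) space
    using a 4x4 homogeneous pose matrix, as a spatial tracker would report."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (pose @ homog.T).T[:, :3]

# Illustrative pose: translate one sweep 10 mm along x so two acquisitions
# land in a shared coordinate frame before voxel-wise fusion.
pose = np.eye(4)
pose[0, 3] = 10.0
pts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
print(transform_points(pose, pts))  # x-coordinates shifted by 10 mm
```

In practice the pose would also encode rotation and a probe-to-tracker calibration; the homogeneous form above composes those by simple matrix multiplication.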
Ultrasound Beamforming, Signal Processing, and Novel Applications
Multiline transmit (MLT) imaging has been demonstrated to improve frame rate by sending sound in multiple directions during a single pulse-echo event, but at the expense of "cross-talk" artifacts. These artifacts arise because the multiple transmit beams have side lobes that overlap in space, as do the receive focusing beams; the overlaps degrade contrast and target detectability in reconstructed images. Several solutions have been demonstrated to reduce these artifacts, including transmit/receive apodization, adaptive beamforming, and coherence processing. We demonstrate a new approach to reconstructing these data by recognizing the MLT acquisition sequence as a spatiotemporal encoding of the array data. Decoding these data enables improved focusing by synthetic aperture methods and reduces cross-talk artifacts. Simulation and phantom experiments demonstrate improvements in lesion detectability and signal-to-noise ratio compared to conventional dynamic receive focusing.
Isolating the mainlobe and sidelobe contributions to an ultrasound image can improve imaging contrast by removing sidelobe clutter. Previous work achieves this separation based on the covariance of received signals. However, forming a covariance matrix of receive signals at each imaging point can be computationally burdensome and memory-intensive for real-time applications. This work demonstrates that the mainlobe and sidelobe contributions to the ultrasound image can be isolated based on the receive aperture spectrum, which greatly reduces computational and memory requirements. This aperture spectrum-based approach is shown to improve lesion contrast by 16.5 to 41.2 dB beyond conventional delay-and-sum B-mode imaging, while the prior method based on the covariance model achieves 6.1 to 21.9 dB contrast improvement beyond conventional delay-and-sum.
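The underlying idea can be sketched as follows: after focusing delays, an on-axis target produces a nearly flat phase across the receive aperture, so its energy concentrates near zero spatial frequency in the aperture spectrum, while off-axis (sidelobe) contributions spread to higher spatial frequencies. This is a minimal illustration under that assumption, not the paper's algorithm; the `keep_bins` cutoff is illustrative:

```python
import numpy as np

def mainlobe_sidelobe_split(channel_data, keep_bins=3):
    """Split a focused pixel's receive-aperture signal into mainlobe and
    sidelobe parts via the spatial FFT across the aperture.
    A coherent (on-axis) target concentrates energy near zero spatial
    frequency; off-axis clutter spreads to higher bins. `keep_bins` is the
    illustrative half-width of the mainlobe region of the spectrum."""
    spec = np.fft.fft(channel_data)
    main = np.zeros_like(spec)
    idx = np.r_[0:keep_bins + 1, len(spec) - keep_bins:len(spec)]
    main[idx] = spec[idx]
    side = spec - main
    return np.fft.ifft(main), np.fft.ifft(side)

coherent = np.ones(64)            # on-axis target: flat aperture phase
main, side = mainlobe_sidelobe_split(coherent)
print(np.abs(side).max())         # coherent energy stays in the mainlobe part
```

A single FFT per pixel replaces the per-pixel covariance matrix of the prior approach, which is the source of the computational savings the abstract describes.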
The reverberant shear wave (RSW) technique offers a promising framework for elastography. In this study, to characterize fibrotic fatty livers at different fibrotic stages, we employed an autocorrelation (AC) estimator within the RSW framework to evaluate the shear wave speed (SWS) of viscoelastic media. To this end, we excited a 150 Hz RSW field in two settings: (i) a finite element (FE) simulation of a RSW field in a 3D model of a whole-organ fatty liver and (ii) RSW experiments on two castor-oil-in-gelatin phantoms fabricated in the lab. In the FE simulations, to represent a more realistic liver model, a thin adipose fat layer and a muscle layer were added as viscoelastic power-law materials on top of the liver model. The SWS estimate from the RSW simulation was verified against predictions from the theory of composite media. For the RSW experiments on phantoms, the SWS estimates were compared with SWS results obtained from a stress relaxation test as an independent modality. The simulation results showed that the RSW-based AC estimator provides good estimates of SWS, with better than 90% accuracy relative to theory. The RSW estimator results from the phantom experiments at different background stiffness levels also provided experimental support for the utility of the estimator. These results demonstrate that the AC estimator is sensitive to changes in the viscoelastic properties of the media.
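The principle behind an AC-based SWS estimate can be shown in a reduced 1-D sketch: a reverberant-like field built from many random-phase plane waves has a spatial autocorrelation that oscillates at the shear wavenumber, from which the SWS follows as 2*pi*f/k. The 150 Hz excitation matches the study; the true SWS, geometry, and peak-picking estimator here are illustrative stand-ins for the paper's 3-D AC estimator:

```python
import numpy as np

f = 150.0                              # vibration frequency [Hz], as in the study
sws_true = 2.0                         # illustrative ground-truth SWS [m/s]
k = 2 * np.pi * f / sws_true           # shear wavenumber [rad/m]

rng = np.random.default_rng(0)
dx = 5e-5
x = np.arange(4000) * dx               # 20 cm sampling line
# Superpose many plane waves with random phases travelling both ways:
# a 1-D stand-in for a reverberant field.
field = sum(np.cos(s * k * x + rng.uniform(0, 2 * np.pi))
            for s in (1, -1) for _ in range(50))

# The spatial autocorrelation oscillates at the shear wavenumber k
ac = np.correlate(field, field, mode="full")[len(x) - 1:]
spec = np.abs(np.fft.rfft(ac))
nu = np.fft.rfftfreq(len(ac), d=dx)    # spatial frequency [cycles/m]
k_est = 2 * np.pi * nu[1 + np.argmax(spec[1:])]
sws_est = 2 * np.pi * f / k_est
print(round(sws_est, 2))               # recovers ~2.0 m/s
```

In the paper's setting the AC is evaluated in 3-D and fitted to the theoretical reverberant-field correlation rather than peak-picked, but the wavenumber-to-SWS conversion is the same.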
A promising candidate for improved imaging of breast cancer is ultrasound computed tomography (USCT). To make full use of the 3D interaction of the ultrasound fields with the breast, we are focusing our research on full 3D USCT systems. While our previous device (3D USCT II) allowed nearly unfocused emission and reception with approx. 600 emitters and 1400 receivers, its spatial sampling of the object was very sparse. To improve contrast in a sparse system, we realized an optimized, pseudo-randomly sampled USCT device (3D USCT III) with approx. 2300 transducers. Additionally, the opening angle, the bandwidth, and the active area of the transducers were improved. New front-end electronics with custom ASICs allow bidirectional operation of the transducers, acquiring approx. six times more A-scans in one data acquisition step. This paper presents the setup of the new system and initial results acquired during the ongoing commissioning.
Full-waveform inversion (FWI) for ultrasound computed tomography is an advanced method that provides quantitative, high-resolution images of tissue properties. Two main obstacles to the widespread adoption of FWI in clinical practice are (1) its high computational cost and (2) the requirement of a good initial model to mitigate the non-convexity of the inverse problem. The latter is commonly referred to as "cycle skipping", which occurs when synthetic and observed signals are misaligned by more than half a cycle and usually traps the inversion in a local minimum. Source-encoding strategies, which simultaneously activate several emitters and have been proposed to reduce the simulation cost, further aggravate this issue due to the multiple arrivals of the wavefronts. We present a time-domain acoustic full-waveform inversion strategy utilizing a recently proposed misfit functional based on optimal transport. Using a graph-space formulation, the discrepancy between simulated and observed signals can be computed efficiently by solving an auxiliary linear program. This approach alleviates the common need for a good initial model and/or low-frequency data. Furthermore, combining this misfit functional with random source encoding and a stochastic trust-region method significantly reduces the computational cost per FWI iteration. In-silico examples using a numerical phantom for breast-screening ultrasound tomography demonstrate the ability of the proposed inversion strategy to converge to the ground truth even when starting from a weak prior and cycle-skipped data.
Characterizing the spatial resolution and uncertainties of a tomographic reconstruction is crucial to assess its quality and to assist the decision-making process. Bayesian inference provides a general framework to compute conditional probability density functions of the model space. However, analytic expressions and closed-form solutions for the posterior probability density are limited to linear inverse problems, such as straight-ray tomography, under the assumption of a Gaussian prior and Gaussian data noise. Resolution analysis and uncertainty quantification are significantly more complicated for non-linear inverse problems such as full-waveform inversion (FWI), and sampling-based approaches such as Markov chain Monte Carlo are often impractical because of their tremendous computational cost. However, under the assumption of Gaussian priors in model and data space, we can exploit the machinery of linear resolution analysis and find a Gaussian approximation of the posterior probability density by using the Hessian of the regularized objective functional. This non-linear resolution analysis rests on (i) a quadratic approximation of the misfit functional in the vicinity of an optimal model and (ii) the idea that an approximation of the Hessian can be built efficiently from gradient information of a set of perturbed models around the optimal model. The inverse of the preconditioned Hessian serves as a proxy of the posterior covariance, from which space-dependent uncertainties as well as correlations between parameters and inter-parameter trade-offs can be extracted. Moreover, the framework proposed here also allows for inter-comparison between different tomographic techniques. Specifically, we aim for a comparison between tissue models obtained from ray tomography and models obtained with FWI using ultrasound data.
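The Hessian-to-covariance relation the abstract exploits is exact in the linear-Gaussian case, which makes for a compact illustration: for a linear forward operator G, data noise covariance C_d, and prior covariance C_m, the Hessian of the regularized least-squares objective is G^T C_d^-1 G + C_m^-1 and its inverse is the posterior covariance. A toy sketch (the matrices are illustrative, not from the paper):

```python
import numpy as np

G = np.array([[1.0, 0.5],
              [0.0, 1.0],
              [1.0, 1.0]])             # illustrative linear forward operator
C_d = 0.1 * np.eye(3)                  # data noise covariance
C_m = 1.0 * np.eye(2)                  # prior model covariance

# Hessian of 0.5*||d - Gm||^2_{C_d} + 0.5*||m - m0||^2_{C_m}
hessian = G.T @ np.linalg.inv(C_d) @ G + np.linalg.inv(C_m)
post_cov = np.linalg.inv(hessian)      # posterior covariance (exact here,
                                       # a proxy in the non-linear FWI case)

sigmas = np.sqrt(np.diag(post_cov))    # space-dependent uncertainties
corr = post_cov[0, 1] / (sigmas[0] * sigmas[1])  # inter-parameter trade-off
print(sigmas, corr)
```

In the non-linear FWI setting, G is replaced by the Jacobian at the optimal model and the Hessian is approximated from gradients of perturbed models, as described above.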
In a three-dimensional ultrasound computed tomography (3D USCT) system, system errors such as transducer delay, transducer position deviation, and temperature error affect the quality of reconstructed images. Most existing calibration methods use iterative schemes to solve large-scale systems of linear equations. In our case, calibrating the transducer delay and position deviation of the considered 3D USCT system amounts to solving a linear system of about 840,000 equations and 11,500 unknowns. For such a large system, existing iterative methods require substantial computation time, and their accuracy also needs improvement. Considering that neural networks can find optimized solutions for large-scale linear systems, we propose a neural network method for transducer delay and position deviation calibration. We designed a neural network that calibrates the delay and position solutions jointly during training. We tested the method with simulated system data, adding transducer delays in the range of 0.7 to 1.3 μs and position deviations in the range of -1 to 1 mm for the X- and Y-axes and -0.3 to 0.3 mm for the Z-axis. Results show that the mean delay error is reduced to 0.15 μs and the mean position error to 0.15 mm after a neural network calibration process that takes about 11 minutes. The delay calibration result is better than the existing Newton method in the literature, while our method is considerably less time-consuming.
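The core computational task, finding unknowns that minimize the residual of a large overdetermined linear system, can be sketched with plain gradient descent, which is essentially what a network optimizer does for a single linear layer. The sizes below are toy stand-ins for the ~840,000 x 11,500 system, and this is not the authors' network architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))     # "equations x unknowns" (toy scale)
x_true = rng.standard_normal(10)       # true delays / position deviations
b = A @ x_true                         # measured right-hand side

x = np.zeros(10)
lr = 1.0 / np.linalg.norm(A, 2) ** 2   # stable step size from the spectral norm
for _ in range(2000):
    x -= lr * A.T @ (A @ x - b)        # gradient of 0.5 * ||A x - b||^2

print(np.max(np.abs(x - x_true)))      # residual error is tiny
```

At the real problem scale, the advantage of such first-order iterations (and of the proposed network) is that they only need matrix-vector products with A, never an explicit factorization of the full system.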
Microbubble (MB) tracking is an integral part of super-resolution ultrasound imaging, providing sharper images and enabling velocity estimation. Tracking the MBs from the last frame to the first can generate different trajectories than tracking from the first to the last when the next positions of a track depend on its previous positions, e.g., in Kalman-based methods. Our hypothesis is that tracking in a forward-backward manner can increase the overall tracking performance. In simulations, MB positions with a parabolic flow profile were generated inside two tubes. Three tracking methods were investigated: nearest-neighbor, Kalman, and hierarchical Kalman. Using the proposed forward-backward strategy, the estimated velocity profiles for all trackers improved and were closer to the actual profiles, with an improvement between 28% and 40% in the relative standard deviation (RSD) of the velocity values over 10 cross-sections of the tubes. A Sprague-Dawley rat kidney was scanned for 10 minutes using a BK5000 scanner and an X18L5s transducer, a linear array probe with 150 elements. The tracking results from the in vivo experiments showed that the combined image of the forward and backward tracks had 35% additional unique track positions and a clear visual enhancement of the super-resolved velocity map. Overall, the improvements in visual quality and velocity estimates suggest the forward-backward strategy as an upgrade for Kalman-based trackers.
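The forward-backward idea can be illustrated even with the simplest of the three trackers, nearest-neighbor linking: because greedy assignment is not symmetric in time, reversing the frame order can recover links the forward pass misses. A minimal sketch with hypothetical point positions (the gating distance and data are illustrative):

```python
import numpy as np

def nn_track(frames, max_dist):
    """Greedy nearest-neighbor tracker: link each point to the closest
    point in the next frame within a gating distance. `frames` is a list
    of (N_i, 2) position arrays; returns a set of linked position pairs."""
    links = set()
    for a, b in zip(frames, frames[1:]):
        for p in a:
            d = np.linalg.norm(b - p, axis=1)
            j = np.argmin(d)
            if d[j] <= max_dist:
                links.add((tuple(p), tuple(b[j])))
    return links

# One point splits ambiguously toward two candidates in the next frame:
frames = [np.array([[0.0, 0.0]]),
          np.array([[0.3, 0.0], [0.4, 0.0]])]
fwd = nn_track(frames, max_dist=0.5)
# Backward pass: track the reversed sequence, then flip pairs to forward time.
bwd = {(q, p) for p, q in nn_track(frames[::-1], max_dist=0.5)}
combined = fwd | bwd
print(len(fwd), len(bwd), len(combined))  # backward recovers an extra link
```

For Kalman-based trackers the asymmetry is stronger still, since each prediction depends on the whole track history, which is why the paper reports the largest gains there.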
In this paper, we address the problem of tissue motion compensation in blood flow estimation from ultrafast Doppler sequences. The goal is to improve the estimation of tumor blood flow, offering neurosurgeons better visualization of this flow and thereby helping them make better decisions while performing brain surgery. This can be achieved by solving the problem of separating blood flow and tissue in ultrasound images. To solve this problem, we build on a recently developed variant of the Robust Principal Component Analysis (RPCA)-based method that embeds a deconvolution step in the algorithm to improve the resolution of the reconstructed blood flow. However, this approach is prone to failure in the presence of tissue motion. In this work, we propose to overcome this limitation by incorporating a motion compensation step into the above RPCA-based method. We implement and quantitatively compare motion compensation algorithms based on the Lucas-Kanade and Demons registration methods on simulated data. We show, using both simulated and preliminary in-vivo data, that a motion compensation step improves the perception of thin vessels and reduces the amount of noise in the estimated flow.
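The role of the motion compensation step can be sketched with the simplest possible registration: estimating a single global integer shift between frames by phase correlation and undoing it, so that slow-moving tissue appears static to the subsequent tissue/blood separation. The real methods compared in the paper (Lucas-Kanade, Demons) estimate dense sub-pixel motion; this is only a minimal analogue on synthetic data:

```python
import numpy as np

def estimate_shift(ref, moving):
    """Estimate the integer (row, col) shift of `moving` relative to `ref`
    via FFT-based cross-correlation (phase-correlation style)."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # map circular FFT indices to signed shifts
    return [int(i) if i <= s // 2 else int(i) - s for i, s in zip(idx, corr.shape)]

rng = np.random.default_rng(2)
ref = rng.standard_normal((64, 64))                 # "tissue" speckle frame
moving = np.roll(ref, shift=(3, -2), axis=(0, 1))   # tissue moved by (3, -2)
dy, dx = estimate_shift(ref, moving)
compensated = np.roll(moving, shift=(-dy, -dx), axis=(0, 1))
print((dy, dx), np.allclose(compensated, ref))
```

After such a compensation pass, the tissue component of the ensemble is closer to low-rank, which is exactly the structure the RPCA separation relies on.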
Super-resolution (SR) imaging is currently conducted using fragile ultrasound contrast agents. This precludes using the full acoustic pressure range, and the distribution of bubbles has to be sparse for them to be isolated for SR imaging; images have to be acquired over minutes to accumulate enough positions for visualizing the vasculature. A new method for SUper Resolution imaging using the Erythrocytes (SURE) as targets is introduced, which makes it possible to maximize the emitted pressure for good signal-to-noise ratios. The abundance of erythrocyte targets makes acquisition fast, and SURE images can be acquired in seconds. A Verasonics Vantage 256 scanner was used in combination with a GE L8-18iD linear array probe operated at 10 MHz for a wavelength of 150 μm. A 12-emission synthetic aperture ultrasound sequence was employed to scan the kidney of a Sprague-Dawley rat for 24 seconds to visualize its vasculature. An ex vivo micro-CT image using the contrast agent Microfil was also acquired at a voxel size of 22.6 μm for validating the SURE images. The SURE image revealed vessels with sizes down to 29 μm, five times smaller than the ultrasound wavelength, and the dense grid of vessels in the full kidney was reliably shown for scan times from 1 to 24 seconds. Visually, the SURE images revealed the same vasculature as the micro-CT images. SURE images are acquired in seconds rather than minutes without contrast injection for easy clinical use, and they can be measured at full regulatory levels for pressure, intensity, and probe temperature.
Power-Doppler ultrasound (PD-US) imaging without contrast enhancement is being developed to routinely monitor for changes in blood perfusion. Although PD-US methods do not measure perfusion quantitatively, they can reliably indicate spatiotemporal variations in muscle perfusion once the clutter and noise power are sufficiently minimized. This paper explores a spatial registration method that is applied to echo signals prior to principal components analysis (PCA)-based clutter and noise filtering. The goal is to achieve PD-US images that predictably map relative perfusion. We use primarily echo-signal simulations to demonstrate sub-sample spatial registration of echo frames prior to clutter filtering over a range of tissue motion seen clinically. Registration narrows the eigen-spectrum of the tissue clutter component to a point where PCA filters are highly efficient at eliminating clutter power. However, the ability of the clutter filter to pass blood-signal power depends on the spatial patterns of blood cell movement in tissues. Prior in vivo studies have shown that symmetric Doppler spectra are most commonly observed for peripheral perfusion data. Symmetric spectra indicate nondirectional or diffuse perfusion patterns for which PD-US methods predictably pass 30-50% of the true blood-signal power. Given the unique features of peripheral perfusion imaging, spatial registration methods can significantly improve the reliability of PD-US imaging to represent tissue perfusion.
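The PCA-based clutter filtering that the registration step is designed to help can be sketched on synthetic data: stack the slow-time ensemble as a Casorati matrix (pixels x frames), discard the largest principal components (near-static tissue clutter), and integrate the remaining power per pixel. The data, sizes, and fixed component cutoff below are illustrative; practical filters choose the cutoff adaptively:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_frames = 400, 50
# Tissue: strong, static across frames (rank-1 after registration)
tissue = 100.0 * np.outer(rng.standard_normal(n_pix), np.ones(n_frames))
# Blood: weak, decorrelated frame-to-frame, confined to a perfused region
blood = np.zeros((n_pix, n_frames))
blood[:50] = rng.standard_normal((50, n_frames))
noise = 0.01 * rng.standard_normal((n_pix, n_frames))
casorati = tissue + blood + noise

U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
s_filt = s.copy()
s_filt[:2] = 0.0                               # reject tissue-clutter components
filtered = U @ np.diag(s_filt) @ Vt
power = np.mean(filtered ** 2, axis=1)         # power Doppler value per pixel

print(power[:50].mean() / power[50:].mean())   # perfused region >> background
```

Spatial registration matters precisely because it compresses the tissue signal into the few leading components zeroed here; unregistered motion spreads clutter energy into the components that should carry blood signal.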
One of the integral parts of super-resolution ultrasound imaging (SRI) is particle tracking. This paper presents tracking for a new approach, SUper Resolution ultrasound imaging using Erythrocytes (SURE), which uses erythrocytes as the targets instead of fragile microbubbles. Acquisition of SURE data can be accomplished in seconds due to the abundance of erythrocyte targets. The nearest-neighbor (NN) algorithm was used to track the erythrocytes. SURE intensity maps were created by three NN trackers with maximum-velocity constraints of 20, 40, and 80 mm/s. By combining the trajectories of the three trackers into one map, and by fusing the intensity maps with an image fusion method based on the discrete wavelet transform, it was demonstrated that the combined trajectories and fused intensity maps carry more information than any single map.
Pancreatic cancer (PC) is one of the most aggressive cancers, with a mortality rate of 98%. Although the diagnosis of PC is difficult in early stages, several imaging techniques support the screening process, namely ultrasonography (US), computed tomography (CT), and endoscopic ultrasound (EUS). The EUS procedure reports the highest sensitivity (up to 87%), and histological samples may be acquired during the same procedure. However, EUS sensitivity depends on the gastroenterologist's experience. The presented method performs an automatic frame-by-frame detection of PC in complete EUS videos. First, the images are preprocessed to rearrange the radial image intensities, filter out speckle noise, and perform a contrast enhancement to highlight relevant echo patterns. Then, a pre-trained convolutional neural network (CNN) is adapted to the ultrasound domain by a transfer learning strategy to characterize and classify EUS images into PC and non-PC classes. Finally, mislabeled images are corrected by a temporal analysis. The methodology is evaluated using a data set of 66,249 frames from 55 EUS cases: 18 patients belong to the PC class and 37 to the non-PC class. A cross-validation scheme is applied seven times to evaluate the performance of three CNN architectures: GoogLeNet, ResNet18, and ResNet50. The best results were 93.2 ± 4.0, 87.7 ± 5.4, 95.0 ± 5.6, and 87.0 ± 6.7 in accuracy, sensitivity, specificity, and F-score, respectively, achieved with the ResNet50 architecture.
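The temporal-correction step exploits the fact that adjacent EUS frames show the same tissue, so isolated label flips from the CNN are likely errors. One simple realization of such a correction, offered as an illustrative sketch rather than the paper's exact procedure, is a sliding-window majority vote over the per-frame binary labels (the window length is an assumption):

```python
import numpy as np

def temporal_majority(labels, window=5):
    """Replace each frame label with the majority label in a window
    centered on it, removing isolated misclassifications."""
    labels = np.asarray(labels)
    half = window // 2
    padded = np.pad(labels, half, mode="edge")   # extend ends to keep length
    return np.array([int(padded[i:i + window].sum() > half)
                     for i in range(len(labels))])

raw = [1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0]       # isolated flips at idx 2 and 9
print(temporal_majority(raw).tolist())            # both flips are corrected
```

A longer window suppresses longer error bursts at the cost of delaying genuine class transitions, which is the main tuning trade-off of any such filter.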
First carpometacarpal (CMC1) osteoarthritis is one of the most prominent forms of hand osteoarthritis (OA), with an estimated prevalence of up to 33%. X-ray radiography is the most common imaging modality used in the diagnosis of CMC1 OA. However, studies have reported significant discrepancies between patient-related outcomes and radiographic evidence of OA, which may be attributed to the lack of X-ray soft tissue contrast. Therefore, in conjunction with rapidly expanding soft tissue modalities such as magnetic resonance imaging (MRI) and ultrasound (US), there has been increased interest in the role that soft tissue structures, such as the joint synovium, play in the progression of OA. US and MRI are excellent for imaging soft tissue structures; however, US is highly operator-dependent and inherently 2D, while MRI is associated with long waitlist times, high operating costs, and inaccessibility to patients with mobility impairments. Three-dimensional (3D) US technology may overcome these limitations by providing a method for bedside monitoring of OA progression and treatment response. This paper validates the use of our developed 3DUS device for synovial volume measurements in a CMC1 OA patient study. The CMC1 patient was imaged to compare the measurement capabilities of 3DUS and MRI for synovial tissue volumes. Results showed a 29.61% difference between the acquired MRI and 3DUS volumetric measurements. Furthermore, the coefficient of variation was 1.4% for 3DUS and 5.7% for MRI, suggesting that the volumetric measurements from the 3DUS image were more consistent and better suited for clinical imaging and assessment of the CMC1 joint.
Segmentation of cardiac boundaries on echocardiographic images can provide important information for cardiac diagnosis and functional assessment. Due to the low image quality, traditional segmentation methods are subject to large performance variation and low segmentation accuracy. Manual segmentation, on the other hand, is slow, laborious, and observer-dependent, making it unsuitable for real-time image-guided interventions. In this study, we developed a novel deep learning-based method to rapidly and accurately segment the myocardium and endocardium of the left ventricle and the left atrium. The proposed method, named the mutual boosting network, consists of three modules: a localization module (L module), a classification module (C module), and a segmentation module (S module). The L module detects the regions of interest (ROIs) of the cardiac substructures. The C and S modules then derive the classification and segmentation of each substructure within its detected ROI. We conducted a five-fold cross-validation on 100 patient cases. The endocardium (LVEndo) and epicardium (LVEpi) of the left ventricle and the left atrium (LA) were segmented using the proposed method, and the segmentation accuracy was evaluated using the Dice similarity coefficient (DSC) and mean absolute distance (MAD). At the end-diastole (ED) phase, the DSC and MAD are 0.94 ± 0.02 and 0.05 ± 0.05 mm for the LVEndo, 0.95 ± 0.02 and 0.04 ± 0.05 mm for the LVEpi, and 0.88 ± 0.09 and 0.25 ± 1.3 mm for the LA. At the end-systole (ES) phase, the DSC and MAD are 0.93 ± 0.04 and 0.07 ± 0.15 mm for the LVEndo, 0.95 ± 0.03 and 0.05 ± 0.1 mm for the LVEpi, and 0.90 ± 0.06 and 0.14 ± 0.64 mm for the LA. The high DSC and sub-millimeter MAD values demonstrate the potential of the proposed method for myocardial function assessment and real-time interventional image guidance.
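The primary evaluation metric above, the Dice similarity coefficient, has a simple closed form for binary masks: twice the intersection over the sum of the two mask areas. A minimal sketch with hypothetical masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2 * |A ∩ B| / (|A| + |B|), in [0, 1], 1 = perfect overlap."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

pred = np.zeros((10, 10), dtype=bool)
pred[2:8, 2:8] = True          # predicted mask: 6x6 = 36 px
ref = np.zeros((10, 10), dtype=bool)
ref[3:8, 2:8] = True           # reference mask: 5x6 = 30 px
print(round(dice(pred, ref), 3))   # 2*30 / (36+30) = 0.909
```

The companion metric, MAD, instead averages point-to-surface distances between the predicted and reference contours, so the two together capture both overlap and boundary accuracy.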
Ultrasound contrast agents (UCAs) are gas-encapsulated microspheres that oscillate volumetrically when exposed to an ultrasound field, producing a backscattered signal that can be used for improved ultrasound imaging and drug delivery. UCAs are widely used for contrast-enhanced ultrasound imaging, but improved UCAs are needed to enable faster and more accurate contrast agent detection algorithms. Recently, we introduced a new class of lipid-based UCAs called Chemically Cross-linked Microbubble Clusters (CCMCs), formed by physically tethering individual lipid microbubbles into a larger aggregate cluster. The advantage of these novel CCMCs is their ability to fuse together when exposed to low-intensity pulsed ultrasound (US), potentially generating unique acoustic signatures that can enable better contrast agent detection. The main objective of this study is to demonstrate, using deep learning, that the acoustic response of CCMCs is distinct from that of individual UCAs. Acoustic characterization of CCMCs and individual bubbles was performed using a broadband hydrophone or a clinical transducer attached to a Verasonics Vantage 256. A simple artificial neural network (ANN) was trained to classify raw 1D RF ultrasound data as originating from either CCMC or non-tethered individual bubble populations. The ANN classified CCMCs with an accuracy of 93.8% for data collected with the broadband hydrophone and 90% for data collected with the clinical transducer on the Verasonics system. These results suggest that the acoustic response of CCMCs is unique and could be used to develop a novel contrast agent detection technique.
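To make the classification setup concrete, here is a minimal, hypothetical stand-in: a single-neuron "network" trained by gradient descent to separate two synthetic populations of 1-D signals that differ in spectral content. The signals, labels, and model here are purely illustrative and bear no relation to the paper's actual data or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_rf_line(freq):
    """Synthetic 1-D 'RF line': a noisy sinusoid standing in for an echo."""
    t = np.arange(256)
    return np.sin(2 * np.pi * freq * t / 256) + 0.1 * rng.standard_normal(256)

# two toy populations whose spectral content differs
X = np.array([toy_rf_line(f) for f in [8] * 50 + [16] * 50])
y = np.array([0] * 50 + [1] * 50)

# a single-neuron classifier trained with plain gradient descent
w, b = np.zeros(256), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid output
    g = (p - y) / len(y)                    # gradient of cross-entropy loss
    w -= X.T @ g
    b -= g.sum()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = float(((p > 0.5) == y).mean())
```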
Automatic breast ultrasound (ABUS) imaging is a well-established tool in breast cancer diagnosis. Delineating lesions on ABUS images is an essential step in breast cancer computer-aided diagnosis (CAD). This work aims to develop an automated deep learning-based method for breast tumor segmentation on three-dimensional ABUS. The proposed method, a one-stage hierarchical target activation network, consists of three subnetworks: a fully convolutional one-stage object detector (FCOS), a hierarchical block, and a mask module. A feature extractor derives informative features from the ABUS images, FCOS locates the volumes of interest (VOIs) containing the breast tumor, the hierarchical block enhances feature contrast around the tumor boundary, and the mask module then segments the tumor from the refined feature map within the VOIs. A five-fold cross-validation on 40 patients’ cases was conducted, and the segmented ABUS breast tumors were compared with manual contours using several segmentation measurements. The Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (HD95) are 0.855±0.090 and 1.56±1.02 mm, respectively. These results demonstrate the feasibility and efficacy of the proposed method for breast tumor segmentation, which can further facilitate CAD for breast cancer using 3D ABUS imaging.
Shadow artefacts in ultrasound make clinical interpretation of the image difficult, and even impossible in certain scenarios. Shadow detection and avoidance is therefore an important feature for automatic interpretation of ultrasound images. Deep learning (DL) methods for automatic shadow detection have approached it as a segmentation problem, achieving limited accuracy. Since acoustic shadows appear along the acquisition path, we propose a novel approach: extracting slivers of images, called fanlets, along the acquisition path and employing a simpler classification approach to detect the presence of shadows. Limiting the spatial context for shadow detection helps us achieve a very high accuracy of 97%. On a database of abdominal ultrasound videos from 128 subjects, we show that our approach is superior to U-Net-based shadow segmentation. Since any ultrasound image can be broken into a series of fanlets, our approach can be readily applied to a wide variety of acquisitions.
Shear wave elastography involves applying a non-invasive acoustic radiation force to the tissue and imaging the induced deformation to infer its mechanical properties. This work investigates the use of convolutional neural networks to improve displacement estimation accuracy in shear wave imaging. Our training approach is completely unsupervised, which allows the induced micro-scale deformations to be estimated without ground-truth labels. We also present an ultrasound simulation dataset in which the shear wave propagation has been simulated via the finite element method. Our dataset, made publicly available along with this paper, consists of 150 shear wave propagation simulations in both homogeneous and heterogeneous media, representing a total of 20,000 ultrasound images. We assessed the ability of our learning-based approach to characterise tissue elastic properties (i.e., Young's modulus) on our dataset and compared our results with a classical normalised cross-correlation approach.
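For context, the classical baseline can be sketched as a search for the lag that maximizes the zero-normalized cross-correlation between pre- and post-deformation signals (a minimal 1-D integer-lag illustration; practical implementations use windowed 2-D kernels and subsample interpolation):

```python
import numpy as np

def estimate_shift(a, b, max_shift):
    """Estimate the integer delay d such that b[k] ~= a[k - d] by
    maximizing the zero-normalized cross-correlation over candidate lags."""
    best_rho, best_d = -np.inf, 0
    for d in range(-max_shift, max_shift + 1):
        if d >= 0:
            s1, s2 = a[:len(a) - d], b[d:]
        else:
            s1, s2 = a[-d:], b[:len(b) + d]
        s1 = s1 - s1.mean()
        s2 = s2 - s2.mean()
        den = np.sqrt((s1 ** 2).sum() * (s2 ** 2).sum())
        rho = float((s1 * s2).sum() / den) if den > 0 else 0.0
        if rho > best_rho:
            best_rho, best_d = rho, d
    return best_d

# a random 'pre' trace and a copy delayed by 3 samples
rng = np.random.default_rng(1)
pre = rng.standard_normal(200)
post = np.concatenate([np.zeros(3), pre[:-3]])
print(estimate_shift(pre, post, 10))  # -> 3
```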
Motion tracking aims to accurately localize a moving lesion during radiotherapy to ensure the accuracy of radiation delivery. Ultrasound (US) imaging is a promising modality for guiding radiation therapy in real time. This study proposes a deep learning-based method to track a moving lesion in US images. To narrow the search region, a box-regression-based method is adopted to predefine a region of interest (ROI). Within the ROI, a feature pyramid network (FPN), which uses a top-down architecture with lateral connections, extracts image features, and a region proposal network (RPN), which learns the attention mechanism of the annotated anatomical landmarks, then yields a number of proposals. The training of the networks was supervised by three objectives: a bounding-box regression loss, a proposal classification loss, and a classification loss. In addition, we employed long short-term memory (LSTM) to capture temporal features from the US image sequence, and weights from transfer learning were used as the initial values of our network. Two-dimensional liver US images from 24 patients and the corresponding annotated anatomical landmarks were used to train our proposed method. In testing experiments on 11 patients, our method achieves a mean tracking error of 0.58 mm with a standard deviation of 0.44 mm at a temporal resolution of 69 frames per second. Our proposed method provides an effective and clinically feasible solution for monitoring lesion motion during radiation therapy.
Breast cancer is the most common cancer among women worldwide. 3D Ultrasound Computed Tomography (3D USCT) is a novel imaging method for early breast cancer diagnosis that allows reconstruction of quantitative tissue parameters such as speed of sound and attenuation. For reconstruction, we use the paraxial approximation of the Helmholtz equation as the forward model. We have realized the forward solution, backprojection, and reconstruction for a ring transducer arrangement. The reconstruction software was evaluated with data simulated with k-Wave, yielding a mean error in the speed-of-sound map of 12.6 m/s at a pixel size of 0.3 mm. Spatial resolution was estimated with a resolution phantom containing circular inclusions with realistic speed-of-sound values for breast tissues, indicating a maximum resolution of 2 mm. In this paper we show that our method has an accurate forward solution, present the new backprojection technique, and report initial results of reconstructing simulated data.
We aim to construct the signal between two points inside a model of the human skull, as if virtual transducers were located inside the skull. We show how this can be achieved through the use of time-reversed Green's functions measured on opposite sides of the skull. We then demonstrate how to achieve similar results using special wavefields called focusing functions, which are designed to work when injected from a single side of the medium of interest. We show two ways to obtain these focusing functions: the iterative Marchenko method and inversion of a measured Green's function. The inversion of the Green's function shows potential benefits over the Marchenko method; however, this approach requires further study. We demonstrate how these wavefields perform on 2D acoustic in silico data by injecting the time-reversed Green's functions and focusing functions into the model. We also demonstrate how the response between virtual transducers can be obtained directly through the homogeneous Green's function representation.
We present full-waveform ultrasound computed tomography (USCT) for sound-speed reconstruction based on the angular spectrum method using linear transducer arrays. We first present a transmission scenario in which plane waves are emitted by a transmitting array and received by an array on the opposite side of the object of interest. These arrays are rotated around the object to interrogate the medium from different view angles. Waveform inversion reconstruction is demonstrated on a numerical breast phantom in which the sound speed varies from 1486 to 1584 m/s. This example is used to isolate and examine the impact of each view angle and frequency used in the reconstruction process. We also examine cycle-skipping artifacts as well as optimization schemes that can be used to overcome them. One goal of this work is to provide an open-source example and implementation of the waveform inversion reconstruction algorithm on GitHub: https://github.com/rehmanali1994/FullWaveformInversionUSCT (DOI: 10.5281/zenodo.4774394). Next, we extend the waveform inversion framework to perform sound-speed tomography for pulse-echo ultrasound imaging with a single linear array that transmits pulsed waves and receives signals backscattered from the medium. We first demonstrate that B-mode image reconstructions can be achieved using the angular spectrum method; we then derive an optimization framework for estimating the sound speed in the medium by optimizing B-mode images with respect to slowness via the angular spectrum method. We present an initial proof of concept with point targets in a homogeneous medium to demonstrate the fundamental principles of this new technique.
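The core propagation step behind both scenarios can be illustrated in a few lines (a minimal 1-D monochromatic sketch, not the repository's implementation: FFT the field along the array, multiply by the plane-wave propagator exp(i·kz·dz), inverse-FFT; evanescent components are discarded):

```python
import numpy as np

def angular_spectrum_step(field, dx, dz, k0):
    """Propagate a monochromatic field sampled on a line (spacing dx)
    forward by a distance dz using the angular spectrum method."""
    kx = 2 * np.pi * np.fft.fftfreq(field.size, d=dx)  # lateral wavenumbers
    kz2 = k0 ** 2 - kx ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    # propagating components get a phase advance; evanescent ones are zeroed
    H = np.where(kz2 > 0, np.exp(1j * kz * dz), 0.0)
    return np.fft.ifft(np.fft.fft(field) * H)

# sanity check: a uniform plane wave just accumulates phase k0*dz
c, f = 1540.0, 1e6                 # sound speed (m/s), frequency (Hz)
k0 = 2 * np.pi * f / c
out = angular_spectrum_step(np.ones(64, complex), dx=1e-4, dz=1e-3, k0=k0)
```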
Ultrasound computed tomography (USCT) is an emerging imaging modality that holds great promise for breast cancer diagnosis. Full-waveform inversion (FWI)-based image reconstruction methods can produce high-spatial-resolution images of the acoustic properties of breast tissues. A practical design of breast USCT systems employs a circular, elevation-focused ring array of ultrasonic transducers, with data acquired by translating the ring array orthogonally to the imaging plane. This design allows a fast slice-by-slice (SBS) reconstruction approach that stacks together two-dimensional (2D) images reconstructed at each position of the transducer array. However, the SBS approach assumes 2D propagation physics and usually employs simplified transducer models, so the reconstructed images can contain significant artifacts. Three-dimensional (3D) imaging models that incorporate the focusing effects of the transducers are therefore needed to improve image quality in ring-array-based USCT. To address this, a 3D elevation-focused transducer model for use within a 3D time-domain pseudospectral wave propagation method was developed. The proposed method uses a stylized (line source) geometry of the transducer and achieves elevational focusing by applying a spatially varying delay to the ultrasound pulse (emitter mode) and recorded signal (receiver mode). The proposed transducer model was validated against a semi-analytical method based on the Rayleigh-Sommerfeld diffraction integral solution. In addition, a virtual imaging study using a 3D anatomical numerical breast phantom and the proposed transducer model is presented. The results demonstrate that 3D FWI-based reconstruction methods incorporating the proposed transducer model hold promise for improving image quality in ring-array-based USCT.
Quantitative images showing the speed-of-sound profile of the breast may be obtained by applying full-waveform inversion (FWI) methods to the measured data. These reconstruction methods work well for both dense and normal breasts. Contrast source inversion (CSI) is a frequency-domain FWI method, and many examples of its successful application to breast imaging can be found in the literature. However, all of these works are based on simulated data. In this work, we present our first results from applying CSI to experimental data. CSI was developed by Delft University of Technology, and the experimental data were provided by FUJIFILM Healthcare Corporation. The experimental data were obtained using a ring-shaped transducer that scans a breast-mimicking gelatine phantom. Our initial results with CSI look promising: all inclusions within the phantom are accurately reconstructed.
Transmission-based ultrasound computed tomography (USCT) with a refracted ray-path model is a promising medical imaging technique. However, most reconstruction methods for this technique lead to ill-conditioned problems that require prior information and regularization, and choosing the regularization parameter by guessing and testing can be time-consuming. This report presents a statistical estimate of the Tikhonov regularization factor needed for refraction-corrected sound-speed reconstruction in transmission-based USCT and tests its robustness against measurement noise.
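For reference, once a regularization factor λ has been chosen (by whatever means), the Tikhonov-regularized ray-based reconstruction reduces to a damped least-squares solve. A minimal dense-matrix sketch (real systems use large sparse ray matrices and iterative solvers):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve argmin_x ||A x - b||^2 + lam * ||x||^2 via the normal
    equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# toy example: an overdetermined 3x2 system with a known solution
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true
x = tikhonov_solve(A, b, lam=1e-8)   # tiny lam -> near-exact recovery
```

Larger λ trades data fit for a smaller-norm (more stable) solution, which is exactly why its value matters under measurement noise.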
We developed a 3D ultrasound biomicroscopy (3D-UBM) imaging system and used it to assess ciliary tissues in the eye. Because ultrasound can penetrate opaque ocular tissues, 3D-UBM has a unique ability to create informative 3D visualizations of anterior ocular structures not visible with optical imaging modalities. The ciliary body, located behind the iris, is responsible for fluid production, making it an important ocular structure in glaucoma, and only 3D-UBM allows its visualization and measurement. Several steps were required for visualization and quantitative assessment. To reduce eye motion in 3D-UBM volumes and avoid geometric artifacts, we performed slice alignment using the Transformation Diffusion approach. We applied noise reduction and aligned the volumes to the optic axis to create 3D renderings of the ciliary body in its entirety. We extracted two different sets of images from these volumes, namely en face and radial images, and created a dataset of eye volumes with slices containing the ciliary body, segmented by two analyst trainees and approved by two experts. Deep learning segmentation models (U-Net and Inception-v3+) were trained on both sets of images using appropriate loss functions. Using en face images, Inception-v3+, and a weighted cross-entropy loss, we obtained Dice = 0.81±0.04. Using radial images, Inception-v3+, and a Dice loss, results improved to Dice = 0.89±0.03, probably because radial images make full use of the symmetry of the eye. Cyclophotocoagulation (CPC) is a glaucoma treatment used to destroy the ciliary body partially or completely and reduce fluid production, and 3D-UBM allows one to visualize and quantitatively analyze CPC treatments.
Photoacoustic imaging (PAI) has the potential to detect cancer at an early stage. PAI is safe because it uses non-ionizing radiation, which greatly enhances its clinical feasibility and provides significant benefits over ionizing imaging techniques such as X-ray computed tomography (CT). In this paper, a fully automated 3D deep learning cancer detector is used to detect and localize cancer in freshly excised ex vivo human thyroid and prostate tissue specimens from a three-dimensional (3D) multispectral photoacoustic (MPA) dataset. The model detected and localized the cancer region in a given test MPA image with promising results.
Ultrasound image quality depends strongly on penetration depth and attenuation. The transmit voltage in ultrasound systems can be increased to raise the output power and improve the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) in deeper regions. However, the utility of high transmit voltages, and thus high output power, is limited by the associated thermal and mechanical bioeffects. Additionally, the ability to increase output power is limited in portable and low-cost ultrasound devices, which operate at lower power. We propose a software-based approach, using a conditional generative adversarial network (cGAN), to amplify signals in deeper regions and enhance image quality without increasing the transmit voltage. The cGAN was customized and trained with pairs of beamformed radio-frequency phantom data (n=288) acquired with a Verasonics Vantage system, with input data taken at a low output voltage (20 V) and corresponding output data taken at a high output voltage (70 V). We trained and tested different loss functions and cGAN architectures. Our proposed model, tested on a held-out phantom data set (n=73), improved the average penetration depth by roughly 16% (a 1 cm gain) compared with the low-voltage images. For selected hyper- and hypoechoic regions of interest, we found, relative to the low-voltage images, an average increase in CNR of 160.45%±117.64, an increase in peak SNR of 5675%±1.89 dB, and an increase in SNR of 32.68%±4.84 dB. This work has potential applications in portable devices and in fetal imaging, where safety guidelines significantly limit the transmit voltage.
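For reference, the contrast metric can be sketched as follows (a minimal NumPy illustration of one common CNR definition; the paper's exact definitions of CNR, SNR, and peak SNR are not reproduced here):

```python
import numpy as np

def cnr_db(roi, background):
    """Contrast-to-noise ratio between two regions, in dB:
    20*log10(|mu_roi - mu_bg| / sqrt(var_roi + var_bg))."""
    num = abs(roi.mean() - background.mean())
    den = np.sqrt(roi.var() + background.var())
    return 20.0 * np.log10(num / den)

# toy example: a bright ROI against a darker background
roi = np.array([3.0, 5.0, 3.0, 5.0])   # mean 4, variance 1
bg = np.array([0.0, 2.0, 0.0, 2.0])    # mean 1, variance 1
print(round(cnr_db(roi, bg), 2))  # -> 6.53
```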