This PDF file contains the front matter associated with SPIE Proceedings Volume 12470, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
There has been an inherent compromise between spatial resolution and penetration depth in ultrasound imaging. Optical super-resolution, e.g. through photo-activated localization microscopy (PALM), has overcome the diffraction limit on spatial resolution, although its penetration is limited to in vitro applications. Super-resolution ultrasound (SRUS), particularly through localising spatially isolated individual microbubbles, has been shown to break the wave diffraction limit and generate microscopic resolution at centimetre depth, offering great promise for a wide range of clinical applications. In this talk I will introduce the principles of super-resolution ultrasound through localisation and tracking, our efforts in technical developments to address some of the current challenges in SRUS, and our exploration of its clinical applications.
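As an illustration of the localisation step described above, a minimal sketch in Python, assuming isolated microbubbles appear as sparse bright blobs in a beamformed frame; the thresholding scheme and function names are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import label, center_of_mass

def localize_bubbles(frame, threshold):
    """Sub-pixel (row, col) centroids of isolated bright spots in one frame.

    frame: 2D envelope/beamformed image containing sparse microbubble echoes.
    threshold: intensity above which a pixel is considered part of a bubble
    (an assumed, user-chosen value).
    """
    # Connected groups of pixels above the threshold form candidate bubbles.
    labels, n = label(frame > threshold)
    # The intensity-weighted centroid of each group gives a position estimate
    # far finer than the diffraction-limited point spread function.
    return center_of_mass(frame, labels, index=list(range(1, n + 1)))
```

Tracking then links these centroids across frames to form microvascular maps.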
Ultrasound Computed Tomography (USCT) is an emerging technology for early breast cancer detection. At Karlsruhe Institute of Technology we recently realized a new generation of full 3D USCT device with a pseudo-randomly sampled hemispherical aperture. In this paper we summarize first imaging results with phantoms and first volunteer images. Using a gelatin phantom with PVC inclusions, we evaluated transmission imaging, which showed a deviation from the ground truth of less than 5 m/s in sound speed and 0.2 dB/cm in attenuation for the phantom body, and less than 15 m/s and 0.2 dB/cm for an inclusion with a diameter of 2.2 cm. Geometric errors are on average in the range of 0.2 cm. For reflectivity imaging we showed that the point spread function is nearly isotropic and, with an average of 0.26 mm, close to the theoretical predictions for the current system. While the system is still in final commissioning, the results of the phantom and volunteer imaging are very promising: after further calibration and deeper analysis with phantoms we aim to start a clinical study.
With increasing evidence for supplemental ultrasound (US) for breast cancer screening in women with dense breasts, there is an interest in developing more robust and cost-effective techniques. Compared with handheld US, automated breast ultrasound (ABUS) shows improvements in detection, reproducibility, and operator dependence. However, limitations exist, as high-quality image acquisition is still reliant on operator training and patient positioning. Moreover, installation of current commercial systems is expensive, and they lack point-of-care capabilities, limiting their bedside utility. We developed a dedicated three-dimensional (3D) ABUS device that contains a wearable, patient-conforming 3D-printed dam, a compression assembly, and a motorized 3DUS scanner. Acquisition involves acquiring 2DUS images at a fixed spatial interval and reconstructing them into a 3DUS image. While the 3DUS image has high in-plane resolution, its out-of-plane (elevational) resolution in the reconstruction plane is poor. We hypothesize that combining orthogonal images can improve 3DUS image resolution by recovering some out-of-plane resolution. With orthogonal 3DUS images occupying the same volume, the intensity at any 3D voxel coordinate can be computed from a spherical-weighted function of the voxel intensities of the two original 3DUS images. In this paper, we describe the dedicated 3D ABUS device, its orthogonal acquisition, and the combination approach for creating a 3D complementary breast ultrasound (CBUS) image. We perform experiments to evaluate their impact on 3D image resolution. The proposed CBUS method was evaluated by orthogonally acquiring craniocaudal and mediolateral 3DUS images of an angular wire phantom and calculating the full width at half maximum (FWHM) of the line spread function for each wire. Our results show that combining orthogonal 3DUS images into a 3D CBUS image improves resolution uniformity by recovering some out-of-plane resolution.
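For reference, a minimal sketch of the FWHM measurement used as the resolution metric, assuming a 1D background-subtracted intensity profile sampled across a wire; the sampling step `dx` is an assumed input:

```python
import numpy as np

def fwhm(profile, dx):
    """Full width at half maximum of a line spread function, in units of dx.

    profile: 1D intensity samples across the wire (background removed).
    dx: spacing between samples (e.g., in mm).
    """
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.flatnonzero(p >= half)
    i0, i1 = above[0], above[-1]
    # Linearly interpolate the two half-maximum crossings for sub-sample width.
    left = i0 - (p[i0] - half) / (p[i0] - p[i0 - 1]) if i0 > 0 else float(i0)
    right = i1 + (p[i1] - half) / (p[i1] - p[i1 + 1]) if i1 < p.size - 1 else float(i1)
    return (right - left) * dx
```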
High-resolution ultrasound time-reversal imaging typically requires adequate signal strength. The time-reversal multiple signal classification (TR-MUSIC) algorithm can produce images of point scatterers with subwavelength resolution when a clear separation of the signal space and the noise space exists. When the size of the scatterers is on the order of the wavelength or larger, the TR-MUSIC algorithm suffers from poor image quality due to a tangled eigenstate spectrum. In this study, we use a coherent broadband white-noise-constraint (B-WNC) matched-field processor that requires no knowledge of the eigenspace in the interrogated medium to obtain high-quality ultrasound images. The WNC algorithm enhances the robustness of matched-field beamformers to model mismatch in poor signal conditions. The multi-tone broadband beamformer offers additional gain over a single-tone beamformer by augmenting the dimension of the physical array and exploiting the cross-frequency terms in the time-reversal operator. The dynamic-range bias obtained from a rank-deficient covariance matrix helps the B-WNC images retain better contrast than the TR-MUSIC images. This study also proposes improvements in modeling the replica vectors for the virtual time-reversal process. The transverse mode is combined with the longitudinal mode in the formulation using the free-field Green's function. The resolution is further improved by exploiting the transverse-mode information in the multistatic data. Adaptive spatial windows are applied to the replica vectors according to the displacement structure of the wave mode at the medium-array interface. Numerical simulations and experimental testing demonstrate the potential of the proposed B-WNC algorithm for accurate sizing of extended targets whose size is comparable to the dominant wavelength.
In this paper, the correlation-based ultrasound imaging method called Excitelet is used in conjunction with a new Phase Coherence (PC) metric for medical imaging applications. It is shown that improved lateral resolution and a reduction of imaging artifacts are obtained when compared with the Synthetic Aperture Focusing Technique (SAFT). Moreover, the phase content of the reflectivity map can be overlaid on the ultrasound image and allows distinguishing reflectors with different mechanical properties. This novel approach shows great potential for the imaging of specular reflectors and is supported by numerical and experimental imaging results.
This paper describes a novel way to exploit the correlation-based (CB) imaging algorithm Excitelet, which is based on the correlation of measured signals with reference signals. In this study, the reference signals are approximated using an experimental baseline instead of being computed from a model. The reference signals are obtained by experimentally measuring the Full-Matrix Capture (FMC) with individual reflectors located at one point of a predetermined imaging grid at a time. This experimental CB (Exp CB) approach makes fewer model assumptions than the numerical CB (Num CB) method, meaning that the baseline is closer to reality, resulting in better spatial resolution and imaging contrast.
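A minimal sketch of the correlation-based imaging idea, assuming the measured full-matrix capture is stored as `fmc[tx, rx, t]` and the (modelled or experimentally measured) baseline as `ref[pixel, tx, rx, t]`; the array shapes and names are assumptions:

```python
import numpy as np

def cb_image(fmc, ref):
    """Correlation-based amplitude for each grid point.

    fmc: measured full-matrix capture, shape (n_tx, n_rx, n_t).
    ref: reference (baseline) signals per grid point, shape (n_pix, n_tx, n_rx, n_t).
    """
    meas = fmc.ravel()
    meas = meas / np.linalg.norm(meas)
    img = np.empty(ref.shape[0])
    for k in range(ref.shape[0]):
        r = ref[k].ravel()
        # Normalized zero-lag correlation between measurement and baseline.
        img[k] = np.dot(meas, r / np.linalg.norm(r))
    return img
```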
Phase aberration is one of the key sources of image degradation in handheld B-mode ultrasound imaging. Sound speed heterogeneities create phase aberrations in the image by inducing additional tissue-dependent delays and diffractive effects that conventional beamforming does not incorporate. For this reason, the Fourier split-step angular spectrum method is used to simulate pressure fields in a heterogeneous sound speed medium and create B-mode images based on the cross-correlation of transmitted and received wavefields. Because the strongest aberrations are caused by a laterally varying sound speed profile, this work presents a new sound speed estimator that can be used to correct for aberrations in laterally varying media. Phantom experiments show a 58-76% improvement in point target resolution and a 2.5x improvement in contrast-to-noise ratio as a result of the proposed sound speed estimation and phase aberration correction scheme.
Unlike traditional ultrasound (US) transducers with rigid casings, flexible array transducers can be deformed to patient-specific geometries, thus potentially removing user dependence during real-time monitoring in radiotherapy. Proper transducer geometry estimation is required for the transducer's delay-and-sum (DAS) beamforming algorithm to reconstruct B-mode US images. The main contribution of this work is to track the position of each element of the transducer to improve the quality of the reconstructed images. An NDI Polaris Spectra infrared tracker was used to localize custom-designed optical markers and was interfaced using the Plus toolkit to estimate the transducer geometry in real time. Each marker was localized with respect to a reference marker. Each element's coordinate position and azimuth angle were estimated using a polygon-fitting algorithm. Finally, DAS was used to reconstruct the US image from radio-frequency channel data. Various transducer curvatures were emulated using gel padding placed on a CIRS phantom. The geometric accuracy of localizing the optical markers attached to the transducer surface was evaluated using 3D Cone-Beam Computed Tomography (CBCT). The deviation of the tracked element positions from the CBCT images was measured to be 0.50±0.29 mm. The Dice score for the segmented target structure from the reconstructed US images was 95.1±3.3% for the above-mentioned error in element position. We obtained high accuracy (<1 mm error) when tracking the element positions with different random curvatures. The proposed method can be used for reconstructing US images to assist in real-time monitoring of radiotherapy with minimal user dependence.
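For context, a minimal sketch of delay-and-sum beamforming at one image point using tracked element positions; a single-element transmit and the variable names are assumptions, not the Plus/Polaris pipeline itself:

```python
import numpy as np

def das_pixel(rf, elem_xz, pixel_xz, c, fs, tx_idx=0):
    """Delay-and-sum value at one image point for one single-element transmit.

    rf: channel data, shape (n_elements, n_samples).
    elem_xz: tracked (x, z) position of every element, shape (n_elements, 2).
    pixel_xz: (x, z) of the image point. c: sound speed [m/s]. fs: sampling rate [Hz].
    """
    pixel_xz = np.asarray(pixel_xz, dtype=float)
    d_tx = np.linalg.norm(pixel_xz - elem_xz[tx_idx])       # transmit path length
    d_rx = np.linalg.norm(pixel_xz - elem_xz, axis=1)       # receive path lengths
    idx = np.round((d_tx + d_rx) / c * fs).astype(int)      # two-way delays in samples
    valid = idx < rf.shape[1]
    # Sum the appropriately delayed samples across receive channels.
    return rf[np.flatnonzero(valid), idx[valid]].sum()
```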
3D ultrasound elastography has the potential to be a fast and accurate approach for estimating tissue shear modulus from the wave equation. One of the drawbacks of 3D imaging is the long data acquisition time, which reduces practicality and can introduce artifacts from unwanted tissue motion such as breathing. This paper presents a novel imaging technique for real-time 3D data collection with a volume rate of 2000 volumes/s and a total acquisition time of 0.05 s. Plane wave imaging is used with Shear Wave Absolute Vibro-Elastography (S-WAVE), where an external mechanical vibration source generates multi-frequency shear waves in tissue. A matrix array transducer is used to collect volumes of radio-frequency (RF) data. Axial, lateral and elevational displacements are then estimated over 3D volumes. The curl of the displacements is used in a local frequency estimation technique to estimate elasticity in the acquired volumes. The S-WAVE excitation frequency is extended substantially, up to 500 Hz, by using high-volume-rate imaging. Therefore, the method can be applied to a wide range of applications such as prostate, breast, liver, and thyroid elastography. Results on liver fibrosis phantoms using several excitation frequencies are presented and compared with the manufacturer's elasticity values for validation. The elasticities estimated by the proposed method fall within the manufacturer's elasticity range for all frequencies.
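For reference, the elasticity estimate behind the local frequency estimation step reduces to recovering a local shear wavenumber k at each excitation frequency f and applying μ = ρ(2πf/k)². A minimal sketch, assuming a wavenumber map is already available and soft-tissue incompressibility (E ≈ 3μ); the LFE filter bank itself is not reproduced:

```python
import numpy as np

def elasticity_from_wavenumber(k_local, f_excitation, rho=1000.0):
    """Shear modulus and Young's modulus from a local wavenumber map.

    k_local: estimated local spatial frequency of the shear wave [rad/m].
    f_excitation: mechanical excitation frequency [Hz].
    rho: assumed tissue density [kg/m^3].
    """
    c_shear = 2.0 * np.pi * f_excitation / k_local   # local shear wave speed
    mu = rho * c_shear ** 2                           # shear modulus [Pa]
    return mu, 3.0 * mu                               # (mu, Young's modulus E)
```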
Ultrasound + Image-Guided Procedures: Joint Session with Conferences 12466 and 12470
The first carpometacarpal (CMC-1) joint is a common site of osteoarthritis (OA). The joint disease commonly presents with inflammation of the synovial membrane (synovitis). Inflammation and the formation of new blood vessels (angiogenesis) are integrated processes. Increased blood flow, angiogenesis, and inflammation of the synovial tissue can contribute to symptoms of OA. The role angiogenesis plays in pathogenesis and disease progression is not fully understood. Imaging modalities, such as power Doppler (PD) ultrasound (US), can detect blood flow. Recently, a new Doppler ultrasound technique, superb microvascular imaging (SMI), was developed that uses an algorithm able to visualize low-velocity blood flow more effectively. To better understand the role of angiogenesis in OA and to visualize the three-dimensional (3D) vasculature, we developed a 3DUS system. This preliminary study demonstrates our 3DUS system acquiring PD and SMI images for CMC-1 OA to provide quantification as well as improved blood flow visualization. As part of a clinical trial, a patient presenting with CMC-1 OA was imaged using 3DUS PD and SMI technologies to quantify the synovial volume and Doppler signals. We found synovial Doppler signals present in the 3D PD and SMI images. To optimize the temperature of the device's scanning solution, healthy volunteers were imaged at increasing temperatures. The Doppler signals in the blood vessels were quantified, and we observed an increase in Doppler signal with higher temperatures. This work demonstrates the ability of the 3DUS PD and SMI system to detect, quantify, and visualize vessel and synovial blood flow.
Synovial inflammation is increasingly appreciated as a key feature in osteoarthritis pathogenesis and pain, but the gold-standard method of measuring synovitis (MRI) is inaccessible in routine clinical care. Three-dimensional ultrasound (3DUS) could present a potential solution if it demonstrates good measurement properties for suprapatellar recess synovitis. We recruited five knee osteoarthritis patients awaiting knee replacement, who received both MRI and 3DUS imaging of the knee on the same day. By manually segmenting synovitis on the 3DUS and MRI images, we found that 3DUS has excellent intra-/inter-rater reliability for synovitis volume and differs from MRI segmentations by approximately 26.73%.
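A minimal sketch of the kind of volume comparison implied above, assuming the two segmentations are binary masks resampled to a common voxel grid; the percent-difference convention (relative to the mean volume) is an assumption:

```python
import numpy as np

def segmentation_agreement(mask_a, mask_b, voxel_volume_mm3):
    """Percent volume difference and Dice overlap of two binary masks."""
    va = mask_a.sum() * voxel_volume_mm3
    vb = mask_b.sum() * voxel_volume_mm3
    pct_diff = 100.0 * abs(va - vb) / ((va + vb) / 2.0)
    dice = 2.0 * np.logical_and(mask_a, mask_b).sum() / (mask_a.sum() + mask_b.sum())
    return pct_diff, dice
```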
Ultrasound Image Quantification and Classification
Precise detection of hepatocellular carcinoma (HCC) is crucial for early cancer screening in medical ultrasound. The attenuation coefficient (AC) is emerging as a new biomarker for classifying tumors since it is sensitive to pathological changes in tissues. In this paper, a learning-based method to reconstruct AC images of abdominal regions from pulse-echo data obtained with a single ultrasound convex probe is presented. In the proposed method, the propagation delay caused by variation of the sound speed of the medium is considered in the training phase to increase the reconstruction accuracy of distant targets. In addition, the proposed network adaptively compensates the feature map according to the location of the target area to improve accuracy. The proposed network was evaluated through simulation and in-vivo tests. In simulation tests, the proposed network showed 3.8 dB and 6% improvements over the baseline methods in PSNR and SSIM, respectively. In the in-vivo test, the proposed method classified cysts, benign tumors, and malignant tumors in the abdomen with a p-value of less than 0.02. The accuracy and robustness demonstrated by the proposed method show the broad clinical applicability of quantitative imaging in abdominal ultrasound.
Molecular ultrasound imaging is used to image the expression of specific proteins on the surface of blood vessels using conjugated microbubbles (MBs) that bind to the targeted proteins, which makes MBs ideal for imaging proteins expressed on blood vessels. However, how to optimally apply MBs in an ultrasound imaging system to detect and quantify the targeted protein expression needs further investigation. To address this issue, the objective of this study is to investigate the feasibility of developing and applying a new quantitative imaging marker to quantify the expression of protein markers on the surface of cancer cells. To obtain a numeric value proportional to the amount of MBs bound to the target protein, a standard quantification method applies a destructive pulse, which bursts most of the bubbles in the region of interest. The difference between the signal intensity before and after destruction is used to measure the differential targeted enhancement (dTE). In addition, a dynamic kinetic model is applied to fit the time-intensity curves, and a structural similarity model with three metrics is used to detect the differences between images. Study results show that the elevated dTE signals in images acquired with the targeted (MBTar) and isotype (MBIso) microbubbles are significantly different (p<0.05). Quantitative image features are also successfully computed from the kinetic model and the structural similarity model, which provides the potential to identify new quantitative image markers that can more accurately differentiate the targeted microbubble status.
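A minimal sketch of the dTE computation described above, assuming an ROI-averaged intensity curve and a known burst time; the averaging window is an assumed parameter:

```python
import numpy as np

def differential_targeted_enhancement(intensity, t, t_burst, window=5.0):
    """dTE: mean ROI intensity just before the destructive pulse minus just after.

    intensity: ROI-averaged contrast intensity per frame, shape (n_frames,).
    t: frame timestamps [s]. t_burst: time of the destruction pulse [s].
    window: averaging window length [s] on each side of the burst (assumed).
    """
    intensity, t = np.asarray(intensity, float), np.asarray(t, float)
    pre = intensity[(t >= t_burst - window) & (t < t_burst)]
    post = intensity[(t > t_burst) & (t <= t_burst + window)]
    return pre.mean() - post.mean()
```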
Ultrasound contrast agents (UCAs) are gas-encapsulated microspheres that oscillate volumetrically when exposed to an ultrasound field, efficiently producing backscattered signals that can be used for improved ultrasound imaging and drug delivery applications. We developed a novel oxygen-sensitive hemoglobin-shell microbubble designed to acoustically detect blood oxygen levels. We hypothesize that the structural change in hemoglobin caused by varying oxygen levels in the body can lead to mechanical changes in the shell of the UCA. This can produce detectable changes in the acoustic response that can be used for measuring oxygen levels in the body. In this study, we have shown that oxygenated hemoglobin microbubbles can be differentiated from deoxygenated hemoglobin microbubbles by applying a 1D convolutional neural network to radiofrequency (RF) data. We were able to classify RF data from oxygenated and deoxygenated hemoglobin microbubbles into the two classes with a testing accuracy of 90.15%. The results suggest that the oxygen content of hemoglobin affects the acoustic response and may be used for determining oxygen levels, which could open many applications, including evaluating hypoxic regions in tumors and the brain, among other blood-oxygen-level-dependent imaging applications.
Using waveform-based inversion methods within transcranial ultrasound computed tomography is an attractive emerging reconstruction technique for imaging the human brain. However, such imaging approaches generally rely on possessing an accurate model of the skull in order to account for the complex interactions that occur when the ultrasound waves propagate between soft tissue and bone. To recover the shape of the skull within the context of full-waveform inversion, adjoint-based shape optimization is performed in this study. The gradients with respect to the acoustic properties of the tissues, which are used in conventional full-waveform inversion, act as a proxy for estimating the sensitivities to the shape of the skull. These shape derivatives can be utilized to update the interface between the interior brain tissue and the skull. This technique employs the spectral-element method for solving the wave equation and thus provides a convenient framework for representing the skull interfaces throughout the inversion. Adaptations of the Shepp-Logan phantom are used as a proof of concept to demonstrate this inversion strategy, in which both the shape of the skull and the interior brain tissue are imaged sequentially.
The convergence of waveform inversion in ultrasound tomography is heavily impacted by the choice of starting model. Ray tomography is often used as the starting model for waveform inversion; however, artifacts resulting from ray tomography can persist through waveform inversion. On the other hand, a homogeneous starting model for waveform inversion may result in cycle-skipping artifacts if the frequency of the transmitted waveform is too high or the error between the starting model and the ground truth is too large. Clinical in vivo breast data suggest that waveform inversion from a homogeneous starting model is sufficient for an accurate reconstruction of the speed of sound if the starting model is close enough to the average speed of sound in the medium and the starting frequency for waveform inversion is sufficiently low to avoid cycle skipping. Comparing the results of waveform inversion using ray tomography and a homogeneous sound speed as initial models, the homogeneous starting model avoids the oscillatory artifacts produced by ray tomography at the edges of the breast. Although the RMS error between the two waveform inversion results is 29.6 m/s, most of the error is the result of reconstruction artifacts at the edges of the breast. When the RMS error is measured inside the breast, away from its boundaries, it drops to 11.5 m/s.
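A minimal sketch of the masked RMS comparison between two sound-speed reconstructions; eroding the breast mask to exclude the boundary is one plausible way to realize the "away from its boundaries" measurement and is an assumption here:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def masked_rms_error(sos_a, sos_b, breast_mask, erode_iter=0):
    """RMS difference [m/s] between two sound-speed maps inside a mask.

    Setting erode_iter > 0 shrinks the mask away from the breast boundary so
    that edge artifacts are excluded from the comparison.
    """
    mask = binary_erosion(breast_mask, iterations=erode_iter) if erode_iter else breast_mask
    diff = (sos_a - sos_b)[mask]
    return float(np.sqrt(np.mean(diff ** 2)))
```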
Ultrasound computed tomography (USCT) is an emerging medical imaging modality that holds great promise for breast cancer diagnosis. Full-waveform inversion (FWI)-based image reconstruction methods for USCT can produce high-spatial-resolution and accurate images of the acoustic properties of soft tissues. A common USCT design employs a circular ring-array comprised of elevation-focused ultrasonic transducers. Volumetric imaging can be achieved by translating the ring-array orthogonally to the imaging plane. Slice-by-slice two-dimensional (2D) reconstruction methods have been implemented to form a composite three-dimensional (3D) volume by stacking together reconstructed cross-sectional images at each ring-array position. However, this 2D approach does not account for the 3D wave propagation physics and the focusing properties of the transducers, and can result in out-of-plane scattering-based artifacts and inaccuracies. To overcome this, a new 3D time-domain FWI method is proposed for ring-array-based USCT that concurrently utilizes measurement data acquired from multiple positions of the ring-array. A virtual imaging study of ring-array-based USCT employing a realistic 3D numerical breast phantom was conducted to assess the impact of the number of ring-array measurements on image quality.
Ultrasound computed tomography (USCT) has the potential to detect breast cancer by measuring tissue acoustic properties such as speed-of-sound (SOS). Current USCT image reconstruction methods for SOS fall into two categories, each with its own limitations. Ray-based methods are computationally efficient but suffer from low spatial resolution due to neglecting scattering effects, while full-waveform inversion (FWI) methods offer higher spatial resolution but are computationally intensive, limiting their widespread application. To address these issues, a deep learning (DL)-based method is proposed for USCT breast imaging that achieves SOS reconstruction quality comparable to FWI while remaining computationally efficient. This method leverages the computational efficiency and high-quality image reconstruction capabilities of DL-based methods, which have shown promise in various medical image reconstruction problems. Specifically, low-resolution SOS images estimated by ray-based traveltime tomography and reflectivity images from reflection tomography are employed as inputs to a U-Net-based image reconstruction method. These complementary images provide direct SOS information (via traveltime tomography) and tissue boundary information (via reflectivity tomography). The U-Net is trained in a supervised manner to map the two input images into a single, high-resolution image of the SOS map. Numerical studies using realistic numerical breast phantoms show promise for improving image quality compared to naïve, single-input U-Net-based approaches, using either traveltime or reflection tomography images as inputs. The proposed DL-based method is computationally efficient and may offer a practical solution for enhancing SOS reconstruction quality, which could potentially improve diagnostic accuracy.
Clear identification of bone structures is crucial for ultrasound-guided lumbar interventions, but it can be challenging due to the complex shapes of the self-shadowing vertebral anatomy and the extensive background speckle noise from the surrounding soft tissue structures. Therefore, in this work we present our method for estimating vertebral bone surfaces using a spatiotemporal U-Net architecture that learns from the B-mode image and aggregated feature maps of hand-crafted filters. Additionally, we integrate this solution with our patch-like wearable ultrasound system to capture the repeating anatomical patterns and image the bone surfaces from multiple insonification angles. 3D bone representations can then be created for interventional guidance. The methods are evaluated on spine phantom image data collected by our proposed “Patch” scanner, and our systematic ablation experiment shows that improved accuracy can be achieved with the proposed architecture. Equipped with this surface estimation network, our wearable ultrasound system can potentially provide intuitive and accurate interventional guidance for clinicians in an augmented reality setting.
Endometriosis is a non-malignant disorder that affects 176 million women globally. Diagnostic delays result in severe dysmenorrhea, dyspareunia, chronic pelvic pain, and infertility. Therefore, there is a significant need to diagnose patients at an early stage. Our objective in this work is to investigate the potential of deep learning methods to classify endometriosis from ultrasound data. Retrospective data from 100 subjects were collected at the Rutgers Robert Wood Johnson University Hospital (New Brunswick, NJ, USA). Endometriosis was diagnosed via laparoscopy or laparotomy. We designed and trained five different deep learning models (Xception, Inception-V4, ResNet50, DenseNet, and EfficientNetB2) for the classification of endometriosis from ultrasound data. Using 5-fold cross-validation, we achieved average areas under the receiver operating characteristic curve (AUC) of 0.85 and 0.90 for the two evaluation studies, respectively.
Ultrasound (US) elastography is a technique that enables non-invasive quantification of material properties, such as stiffness, from ultrasound images of deforming tissue. The displacement field is measured from the US images using image matching algorithms, and then a parameter, often the elastic modulus, is inferred or subsequently measured to identify potential tissue pathologies, such as cancerous tissues. Several traditional inverse problem approaches, loosely grouped as either direct or iterative, have been explored to estimate the elastic modulus. Nevertheless, the iterative techniques are typically slow and computationally intensive, while the direct techniques, although more computationally efficient, are very sensitive to measurement noise and require the full displacement field data (i.e., both vector components). In this work, we propose a deep learning approach to solve the inverse problem and recover the spatial distribution of the elastic modulus from one component of the US measured displacement field. The neural network used here is trained using only simulated data obtained via a forward finite element (FE) model with known variations in the modulus field, thus avoiding the reliance on large measurement data sets that may be challenging to acquire. A U-net based neural network is then used to predict the modulus distribution (i.e., solve the inverse problem) using the simulated forward data as input. We quantitatively evaluated our trained model with a simulated test dataset and observed a 0.0018 mean squared error (MSE) and a 1.14% mean absolute percent error (MAPE) between the reconstructed and ground truth elastic modulus. Moreover, we also qualitatively compared the output of our U-net model to experimentally measured displacement data acquired using a US elastography tissue-mimicking calibration phantom.
On one hand, the transmitted ultrasound beam is attenuated as it propagates through the tissue. On the other hand, the received radio-frequency (RF) data contain additive Gaussian noise introduced by the acquisition card and the sensor. These two factors lead to a signal-to-noise ratio (SNR) in the RF data that decreases with depth, effectively rendering deep regions of B-mode images highly unreliable. There are three common approaches to mitigating this problem. The first is increasing the power of the transmitted beam, which is limited by safety thresholds. Averaging consecutive frames is the second option, which not only reduces the frame rate but is also not applicable to moving targets. The third is reducing the transmission frequency, which deteriorates spatial resolution. Many deep denoising techniques have been developed, but they often require clean data for training the model, which is usually only available in simulated images. Herein, a deep noise reduction approach is proposed that does not need clean training targets. The model is trained on pairs of noisy inputs and noisy targets, and the training process converges toward the clean image, i.e., the average of the noisy realizations. Experimental results on a real phantom as well as ex vivo data confirm the efficacy of the proposed method for noise cancellation.
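A minimal sketch of training against noisy targets in the spirit described above; the tiny convolutional model, shapes, and optimizer settings are placeholders, not the authors' network:

```python
import torch
import torch.nn as nn

# Placeholder denoiser operating on (batch, 1, H, W) RF/B-mode patches.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

def train_step(noisy_input, noisy_target):
    """One update using two independent noisy realizations of the same frame.

    With zero-mean noise that is independent between the pair, minimizing the
    MSE against a noisy target drives the output toward the underlying clean
    image (the expected value of the noisy realizations).
    """
    opt.zero_grad()
    loss = mse(model(noisy_input), noisy_target)
    loss.backward()
    opt.step()
    return loss.item()
```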
The objective of this study is to develop a computer-aided diagnosis (CADx) system for successful ultrasound-guided supraclavicular block (SCB). Ultrasound videos were retrospectively collected from 800 patients to develop the CADx system (600 for the training and validation set, and two test sets of 100 each). The proposed method consists of classification and segmentation approaches using convolutional neural networks (CNNs). For the classification method, a ResNet-based model with an augmentation technique, a GRU module, and a self-supervised learning method were added to the comparison experiment. The segmentation approach used a ResNet as the encoder of a U-Net in a cascaded structure that successively trains the classification model by using the prediction results of the U-Net as pseudo-labels. For both the classification and segmentation approaches, ResNet depths beyond 34 layers did not improve performance further, but applying the augmentation methods was effective. In addition, it was confirmed that the classification approach improved performance when the GRU modules were added, but this was not suitable for a real-time setting. The proposed classification and segmentation approaches showed the highest performance, with accuracies of 0.88 and 0.883, precisions of 0.578 and 0.621, recalls of 0.712 and 0.601, F1-scores of 0.639 and 0.609, and AUROCs of 0.913 and 0.919, respectively.
Stroke is a leading cause of morbidity and mortality throughout the world. Three-dimensional ultrasound (3DUS) imaging has been shown to be more sensitive to treatment effect and more accurate in stratifying stroke risk than two-dimensional ultrasound (2DUS) imaging. Point-of-care ultrasound (POCUS) screening is important for patients with limited mobility and at times when patients have limited access to the ultrasound scanning room, such as in the COVID-19 era. We used an optical tracking system to track the 3D position and orientation of the 2DUS frames acquired by a commercial wireless ultrasound system and subsequently reconstructed a 3DUS image from these frames. The tracking requires spatial and temporal calibrations. Spatial calibration is required to determine the spatial relationship between the 2DUS machine and the tracking system, and was achieved by localizing landmarks with known coordinates in a custom-designed Z-fiducial phantom in a 2DUS image. Temporal calibration is needed to synchronize the clocks of the wireless ultrasound system and the optical tracking system so that the position and orientation detected by the optical tracking system can be registered to the corresponding 2DUS frame. Temporal calibration was achieved by initiating the scanning with an abrupt motion that can be readily detected in both systems. This abrupt motion establishes a common reference time point, thereby synchronizing the clocks of the two systems. We demonstrated that the system can be used to visualize the three-dimensional structure of a carotid phantom. The error rate of the measurements is 2.3%. Upon in-vivo validation, this system will allow POCUS carotid scanning in clinical research and practice.
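A minimal sketch of the temporal-calibration step, assuming the abrupt starting motion is detected as the largest jump in the tracker trajectory and in a per-frame image-motion metric; the signal names are assumptions:

```python
import numpy as np

def temporal_offset(tracker_pos, tracker_t, frame_motion, frame_t):
    """Clock offset (tracker time minus ultrasound time) from an abrupt motion.

    tracker_pos: (n, 3) tracked positions; tracker_t: their timestamps [s].
    frame_motion: per-frame motion metric (e.g., mean absolute frame difference);
    frame_t: frame timestamps [s].
    """
    # The abrupt motion appears as the largest jump in each data stream.
    tracker_speed = np.linalg.norm(np.diff(tracker_pos, axis=0), axis=1)
    t_tracker = tracker_t[1:][np.argmax(tracker_speed)]
    t_frames = frame_t[np.argmax(frame_motion)]
    return t_tracker - t_frames
```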
In prostate brachytherapy, a focal boost on dominant intraprostatic lesions (DILs) can reduce the recurrence rate while keeping toxicity low. In recent years, ultrasound (US) prostate tissue characterization has demonstrated the feasibility of detecting dominant intraprostatic lesions. With recent developments in computer-aided diagnosis (CAD), deep learning-based methods have provided solutions for efficient analysis of US images. In this study, we aim to develop a Shifted-windows (Swin) Transformer-based method for DIL classification. The self-attention layers in the Swin Transformer allow efficient feature discrimination between benign tissues and intraprostatic lesions. We simplified the structure of the Swin Transformer to avoid overfitting on a small dataset. The proposed transformer structure achieved 83% accuracy and 0.86 AUC at the patient level with three-fold cross-validation, demonstrating the feasibility of applying our method for dominant lesion classification from US images, which is of clinical significance for radiotherapy treatment planning.
Multistatic synthetic aperture (SA) imaging allows for dynamic focusing at all points in the image, in contrast to conventional beamforming techniques. As a further improvement of image quality, the use of frequency-modulated continuous-wave (FMCW) systems could be considered. However, algorithms need to be adapted to handle imaging in these settings efficiently. In this paper, the use of a Fourier-based imaging (FBI) method from microwave radar technology is proposed for an ultrasound system suitable for FMCW operation. The FBI method is compared to the delay-and-sum (DAS) method regarding theoretical time complexity using the random-access machine computer model. Run-time measurements using C++ implementations as well as a comparison of image quality using the array performance index are conducted. It is shown that the FBI method has better overall time complexity than DAS in FMCW settings. Moreover, the FBI method consistently outperforms DAS in image quality.
Ultrasound (US) radiomics analysis is an emerging research field that aims to overcome clinicians' subjectivity in visual image assessment and interpretation. However, its clinical utility is still limited, and its efficacy depends on the robustness of the radiomic features. The purpose of this work is to evaluate the robustness of US radiomic features under various scanning settings, including central frequency, focal length, and overall Brightness Gain (BG). We tested the concept with Grey Level Co-occurrence Matrix (GLCM) features. All US images were acquired using a Hitachi Noblus US system and a bi-plane probe (EUP-U533C). The study utilized three materials: a tissue-mimicking phantom, beef muscle, and chicken breast. A total of 21 GLCM features were extracted from the US images. The relative percentage change was calculated as the standard deviation (STDEV) or the maximum difference (max difference) divided by the absolute mean value of each GLCM feature while varying BG over the 19-29 range. Among the 21 extracted GLCM features, we found seven robust features, namely differenceEntropy, entropy, homogeneity1, IDMN, IDN, inverseVariance, and sumEntropy, whose variation remained within 10% when varying the frequency, focal length, and BG settings. The results of this study indicate that some US radiomic features may be affected by scanning parameters, while others are more robust to these variations. As radiomics is expected to be a critical component for integrating image-derived information to personalize treatment in the future, robust features should be carefully chosen to obtain reliable radiomics-based analysis.
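A minimal sketch of the robustness criterion described above (relative percentage change per feature across scan settings); the data layout is an assumption and the 10% cut-off follows the text:

```python
import numpy as np

def robust_features(feature_values, names, tol_percent=10.0):
    """Return the feature names whose relative change stays within tol_percent.

    feature_values: array of shape (n_settings, n_features), one row per
    acquisition setting (frequency, focal length, gain, ...).
    """
    mean_abs = np.abs(feature_values.mean(axis=0))
    rel_std = 100.0 * feature_values.std(axis=0) / mean_abs        # STDEV / |mean|
    rel_max = 100.0 * np.ptp(feature_values, axis=0) / mean_abs    # max difference / |mean|
    # Require both conventions to stay within the tolerance (assumed interpretation).
    keep = np.maximum(rel_std, rel_max) <= tol_percent
    return [name for name, k in zip(names, keep) if k]
```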
Breast cancer is the most commonly diagnosed cancer in women in the United States. Early detection of breast tumors enables prompt determination of cancer status, significantly boosting the patient survival rate. Non-invasive and non-ionizing ultrasound imaging is a widely used diagnostic modality in the clinic. To assist clinicians in breast cancer diagnosis, we implemented a vision graph neural network (ViG)-based pipeline that can achieve accurate binary classification (normal vs. breast tumor) and multiclass classification (normal, benign, and malignant) from breast ultrasound images. Our results demonstrate that the average accuracy of ViG is 100.00% for the binary and 87.18% for the multiclass classification task. To the best of our knowledge, this is the first end-to-end, graph-feature-based deep learning pipeline to achieve accurate breast tumor detection from ultrasound images. The proposed ViG-based classifier is accessible for clinical implementation and has the potential to enhance lesion detection from ultrasound images.
Ultrasound (US) imaging is a widely used imaging modality for tumor diagnosis, image-guided intervention, and therapy response assessment in cancer management. However, one major limitation hindering the use of quantitative US is the lack of reproducibility when applied across different institutions and clinical settings. We propose a histogram-based method to examine the imaging reproducibility of US scanners. We tested this method on 8 portable US devices, comprising 4 convex/linear dual-head scanners and 4 transvaginal probes, which together provide 3 transducer types with 4 probes of each model. B-mode images were obtained from a Sun Nuclear phantom with fixed scanning settings, and a region of interest (ROI) capturing the uniform background tissue was used to obtain a pixel intensity histogram. Eight histogram-based features were calculated: entropy, meanDeviation, uniformity, mean, median, variance, root-mean-square (RMS), and standard deviation (STDEV). The feature variances among the 4 probes in each group were used to assess their imaging reproducibility. For the convex US transducers, most histogram features varied within 5%. For the linear US transducers, all histogram features varied within 15%. For the transvaginal US transducers, most histogram features varied within 10%. These experiments provide valuable reproducibility measurements of the portable US devices, which are critical for performing multi-center quantitative studies.
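A minimal sketch of the eight first-order histogram features computed from an ROI; the formulas follow standard definitions and the histogram binning is an assumption:

```python
import numpy as np

def histogram_features(roi, n_bins=256):
    """First-order statistics of an ROI's pixel-intensity histogram."""
    x = roi.ravel().astype(float)
    counts, _ = np.histogram(x, bins=n_bins)
    p = counts / counts.sum()            # normalized histogram
    nz = p[p > 0]
    return {
        "entropy": float(-np.sum(nz * np.log2(nz))),
        "meanDeviation": float(np.mean(np.abs(x - x.mean()))),
        "uniformity": float(np.sum(p ** 2)),
        "mean": float(x.mean()),
        "median": float(np.median(x)),
        "variance": float(x.var()),
        "RMS": float(np.sqrt(np.mean(x ** 2))),
        "STDEV": float(x.std()),
    }
```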
Cardiac strains calculated through speckle tracking echocardiography (STE) have shown promise as prognostic markers linked to functional indices and disease outcomes. However, the presence of acoustic shadowing often challenges the accuracy of STE in small animals such as rodents. The shadowing arises from the complex anatomy of rodents, with operator dexterity playing a significant role in image quality. The effects of the semi-transparent shadows are further exacerbated in right ventricular (RV) imaging due to the thinness and rapid motion of the RV free wall (RVFW). The movement of the RVFW across the shadows distorts speckle tracking and produces unnatural, non-physical strains. The objective of this study was to minimize the effects of shadowing on STE by distinguishing “out-of-shadow” motion and identifying speckles in and out of shadow. Parasternal 2D echocardiography was performed, and short-axis (SA) B-mode images of the RVFW were acquired for a rodent model of pulmonary hypertension (n = 1). Following image acquisition, a denoising algorithm using edge-enhancing anisotropic diffusion (EED) was implemented, and the ensuing effects on strain analysis were visualized using a custom STE pipeline. Speckles in the shadowed regions were identified through a correlation between the filtered image and the original acquisition. Thus, pixel movement across the boundary was identified by enhancing the distinction between the shadows and the cardiac wall, and non-physical strains were suppressed. The strains obtained through STE showed the expected patterns, with enhanced circumferential contractions in the central region of the RVFW, in contrast to the smaller and nearly uniform strains derived from the unprocessed images.
In the domain of brain imaging of small animals, including rats, ultrasound (US) imaging is an appealing tool because it offers a high frame rate and easy access and involves no radiation. However, the rat skull causes artifacts that degrade brain image quality in terms of contrast and resolution. Therefore, minimizing skull-induced artifacts in US imaging is a significant challenge. Unfortunately, the literature on rat skull-induced artifacts is limited, and there is a particular lack of studies exploring the reduction of skull-induced artifacts. Due to the difficulty of experimentally imaging the same rat brain with and without a skull, numerical simulation becomes a reasonable approach to studying skull-induced artifacts. In this work, we investigated the effects of skull-induced artifacts by simulating a grid of point targets inside the skull cavity and quantifying the pattern of skull-induced artifacts. With the capacity to automatically capture the artifact pattern given a large amount of paired training data, deep learning (DL) models can effectively reduce image artifacts in multiple modalities. This work explored the feasibility of using DL-based methods to reduce skull-induced artifacts in US imaging. Simulated data were used to train a U-Net-derived, image-to-image regression network. US channel data with artifact signals served as inputs to the network, and channel data with reduced artifact signals were the regression outputs. Results suggest the proposed method can reduce skull-induced artifacts and enhance target signals in B-mode images.
Quantitative ultrasound (QUS) aims to find properties of scatterers that are related to the tissue microstructure. Among the different QUS parameters, scatterer number density has been found to be a reliable biomarker for detecting different abnormalities. The homodyned K-distribution (HK-distribution) is a model for the probability density function of the ultrasound echo amplitude that can model different scattering scenarios but requires a large number of samples to be estimated reliably. Parametric images of the HK-distribution parameters can be formed by dividing the envelope data into small overlapping patches and estimating the parameters within each patch independently. This approach imposes two limiting constraints: the HK-distribution parameters are assumed to be constant within each patch, and each patch requires enough independent samples. In order to mitigate these problems, we employ a deep learning approach to estimate parametric images of scatterer number density (related to the HK-distribution shape parameter) without patching. Furthermore, an uncertainty map of the network's prediction is quantified to provide insight into the network's confidence in the estimated HK parameter values.
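For reference, a minimal sketch of drawing envelope samples from the homodyned K-distribution via its usual compound representation (a coherent component plus a gamma-modulated complex Gaussian diffuse component); the parameter naming follows common convention rather than the paper:

```python
import numpy as np

def sample_homodyned_k(n, eps, sigma2, alpha, rng=None):
    """Envelope samples from the homodyned K-distribution.

    eps: coherent (specular) amplitude. sigma2: mean diffuse power.
    alpha: clustering parameter, related to the scatterer number density.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)      # unit-mean gamma modulation
    noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    diffuse = np.sqrt(w * sigma2 / 2.0) * noise                # K-distributed diffuse signal
    return np.abs(eps + diffuse)
```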
Freehand (FH) 3D ultrasound (US) imaging is emerging as a promising modality for spine imaging because it is non-invasive and inexpensive. Among the vertebral landmarks that can be used to represent the spine, paired laminae can play a vital role in a transverse scan for 3D spine deformity analysis by providing symmetry information. However, there is currently no laminae landmark recognition algorithm that has been tested on poor-quality 2D US scans. In this study, we propose a deep learning framework to automatically and simultaneously assess the presence of the two laminae and estimate their landmark coordinates for the purpose of live US-based assessment of spine shape. To label the training data, we propose a labeling protocol based on a weight distribution on the virtual bone surface, so that the representative pixel is most likely the pixel closest to the spinal cord. In total, 6 FH 3D US sequences of the spine covering vertebrae T1 to L5 were collected from 3 participants. They were labeled based on the proposed protocol and validated by two spine ultrasound experts. The performance of the deep learning-based lamina landmark detection method was assessed through K-fold cross-validation, with results reaching mean distance errors of 2.1 ± 1.3 mm and 1.8 ± 1.2 mm in true-positive images for the left and right lamina landmarks, respectively. Our method could allow for live laminae landmark extraction during clinical US spine exams, which would be useful for spinal ultrasound image interpretation, vertebral level identification, and 3D spine deformity analysis based on paired laminae landmarks projected onto three anatomical planes.
Most cases of cardiovascular disease, including peripheral arterial disease (PAD) in the lower limb, could be prevented by a healthy diet and refraining from smoking. Yet stent placement is the primary course of treatment to alleviate advanced symptoms of stenosis in the superficial femoral artery (SFA) for people who have already developed PAD. It has been observed that normal stents, which are straight in shape, prevent the naturally occurring swirling flow from forming inside the SFA. Recently, a 3D helical stent has been developed for the SFA, with the assumption that the helical shape would induce swirling flow inside the artery. Swirling flow, in turn, could promote higher wall shear stress and enhance the durability of the treatment. The aim of this study is to investigate the effects of the helical stent on flow in an in-vitro setup, using contrast-enhanced 2D ultrasound Particle Image Velocimetry (PIV), or echo-PIV. As swirling flow is a three-dimensional phenomenon with out-of-plane velocity components, the focus is on finding its signatures in the 2D ultrasound images taken at the helical stent outlet in lieu of imaging the swirling flow itself. Therefore, the regions of interest are the inlet and outlet of the straight and helical models, where the main analysis is done. Initial experiments and the ensuing analysis show that vector complexity and maximum vorticity are significantly higher at the outlet of the helical model compared to its own inlet or the outlet of the straight model. These measures serve as indicators of swirling flow in the helical stent. Whether and how these findings may benefit patients must be further investigated.
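A minimal sketch of the vorticity computation on a 2D echo-PIV velocity field; the regular-grid layout and spacings are assumptions:

```python
import numpy as np

def vorticity_2d(u, v, dx, dy):
    """Out-of-plane vorticity w_z = dv/dx - du/dy on a regular grid.

    u, v: in-plane velocity components, shape (ny, nx).
    dx, dy: grid spacings along x (columns) and y (rows).
    """
    dv_dx = np.gradient(v, dx, axis=1)
    du_dy = np.gradient(u, dy, axis=0)
    return dv_dx - du_dy

# The peak magnitude, np.abs(vorticity_2d(u, v, dx, dy)).max(), is one of the
# swirling-flow indicators compared between inlet and outlet regions of interest.
```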
Transrectal ultrasound (TRUS) images have the advantages of being real-time and low-cost. Segmenting the prostate from TRUS images is essential for preoperative diagnosis and intraoperative treatment of the prostate. In this paper, an Adaptive Detail Compensation Network (ADC-Net) for 3D prostate segmentation is proposed, which utilizes convolutional neural networks (CNNs) to realize automatic segmentation of TRUS images. The proposed method consists of a U-Net-based backbone network, a detail compensation module, three spatial-based attention modules, and an aggregation fusion module. A pre-trained ResNet-34 is utilized as the detail compensation module to compensate for the loss of detailed information caused by the down-sampling process of the U-Net encoder. The proposed method uses the spatial-based attention modules to introduce multilevel features to refine single-layer features, thereby suppressing useless background influence and enriching the contextual information of the foreground. Finally, to obtain the predicted prostate, the aggregation fusion module fuses the refined single-layer features to further enrich the prostate semantic information and filter out other irrelevant information in the TRUS images. Furthermore, a deep supervision mechanism applied in our method also plays an essential role in network training. Experimental results show that the proposed ADC-Net achieves satisfactory results in 3D TRUS image segmentation of the prostate, providing accurate detection of prostate regions.