In ultrasound (US)-guided medical procedures, accurate tracking of interventional tools is crucial to patient safety and clinical outcomes. This requires a calibration procedure to recover the relationship between the US image and the tracking coordinate system. In the literature, calibration has been performed with passive phantoms, which depend on image quality and on parameters such as frequency, depth, and beam thickness, as well as on in-plane assumptions. In this work, we introduce an active phantom for US calibration. This phantom actively detects and responds to the US beams transmitted from the imaging probe. This active echo (AE) approach allows identification of the US image midplane independent of image quality. Both target localization and segmentation can be done automatically, minimizing user dependency. The AE phantom is compared with a crosswire phantom in a robotic US setup. An out-of-plane estimation US calibration method is also demonstrated through simulation and experiments to compensate for the remaining elevational uncertainty. The results indicate that the AE calibration phantom yields more consistent results across experiments with varying image configurations. Automatic segmentation is also shown to have performance similar to manual segmentation.
KEYWORDS: Ultrasonography, Optical tracking, Ferroelectric materials, Data modeling, 3D acquisition, Data acquisition, Field programmable gate arrays, Breast, Receivers, Signal detection
Ultrasound-guided needle tracking systems are frequently used in surgical procedures. Various needle tracking technologies have been developed using ultrasound, electromagnetic sensors, and optical sensors. To evaluate these new needle tracking technologies, 3D volume information is often acquired to compute the actual distance from the needle tip to the target object. The image-guidance conditions for comparison are often inconsistent due to the ultrasound beam-thickness. Since 3D volumes are necessary, there is often some time delay between the surgical procedure and the evaluation. These evaluation methods will generally only measure the final needle location because they interrupt the surgical procedure. The main contribution of this work is a new platform for evaluating needle tracking systems in real-time, resolving the problems stated above. We developed new tools to evaluate the precise distance between the needle tip and the target object. A PZT element transmitting unit is designed in the shape of a needle introducer so that it can be inserted into the needle. We collected time-of-flight and amplitude information in real-time. We propose two systems to collect ultrasound signals. We demonstrate this platform on an ultrasound DAQ system and a cost-effective FPGA board. The results of a chicken breast experiment show the feasibility of tracking a time series of needle tip distances. We performed validation experiments with a plastisol phantom and have shown that the preliminary data fits a linear regression model with an RMSE of less than 0.6mm. Our platform can be applied to more general needle tracking methods using other forms of guidance.
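The time-of-flight (ToF) readings above map to tip-to-target distances through the speed of sound. As a minimal sketch (with an assumed one-way path, an assumed speed of sound, and toy numbers rather than the authors' data), the conversion and a linear-regression check might look like:

```python
import numpy as np

# Toy sketch: convert one-way ToF from the PZT element to distance, then fit
# a linear model against ground truth to absorb any fixed offset. All numbers
# below are illustrative, not measured data.
c = 1540.0                                   # assumed speed of sound in tissue, m/s
tof = np.array([6.5e-6, 13.0e-6, 19.5e-6])   # one-way times of flight (s)
dist_mm = c * tof * 1e3                      # estimated distances in mm

truth_mm = np.array([10.0, 20.0, 30.0])      # hypothetical ground-truth distances
slope, offset = np.polyfit(dist_mm, truth_mm, 1)
residuals = np.polyval([slope, offset], dist_mm) - truth_mm
rmse = np.sqrt(np.mean(residuals ** 2))      # should be well under 0.6 mm here
```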
Controlling the thermal dose during ablation therapy is instrumental to successfully removing the tumor while preserving the surrounding healthy tissue. In the practical scenario, surgeons must be able to determine the ablation completeness in the tumor region. Various methods have been proposed to monitor it, one of which uses ultrasound, since it is a common intraoperative imaging modality due to its non-invasive, cost-effective, and convenient nature. In our approach, we propose to use time of flight (ToF) information to estimate speed of sound changes. Accurate speed of sound estimation is crucial because it is directly correlated with temperature change and the subsequent determination of ablation completeness. We divide the region of interest in a circular fashion with a variable radius from the ablator tip. We introduce the concept of effective speed of sound in each of the sub-regions. Our active PZT element control system facilitates this unique approach by allowing us to acquire one-way ToF information between the PZT element and each of the ultrasound elements. We performed a simulation and an experiment to verify the feasibility of this method. The simulation result showed that we could compute the effective speed of sound within 0.02m/s error in our discrete model. We also performed a sensitivity analysis for this model. Most of the experimental results had less than 1% error. Simulation using a Gaussian continuous model with multiple PZT elements is also demonstrated, in which we simulate the effect of the element location on the optimization result.
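In a discrete model, the effective speed-of-sound idea above reduces to a linear system in the per-region slownesses: each one-way ToF is a path-length-weighted sum of 1/c over the sub-regions the ray crosses. A minimal sketch with assumed geometry and toy path lengths (not the authors' experimental configuration):

```python
import numpy as np

# Hypothetical discrete model: each sub-region i has an unknown effective
# slowness s_i = 1/c_i; the one-way ToF from the PZT element to array
# element j is sum_i L[j, i] * s_i, i.e. a linear system  L s = t.
def solve_effective_slowness(L, tof):
    """Least-squares slowness estimate from path lengths L (n_rays x n_regions)
    and measured one-way times of flight tof (n_rays,)."""
    s, *_ = np.linalg.lstsq(L, tof, rcond=None)
    return s

# Toy example: 2 sub-regions, 3 rays with known path lengths (metres).
L = np.array([[10e-3, 20e-3],
              [12e-3, 18e-3],
              [15e-3, 15e-3]])
c_true = np.array([1540.0, 1480.0])     # ground-truth speeds (m/s)
tof = L @ (1.0 / c_true)                # simulated noiseless ToF
c_est = 1.0 / solve_effective_slowness(L, tof)
```

In the noiseless toy case the least-squares solution recovers the ground-truth speeds exactly; with measurement noise, the same system is solved in the least-squares sense.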
Ultrasonography is a widely used imaging modality to visualize anatomical structures due to its low cost and ease of use; however, it is challenging to acquire acceptable image quality in deep tissue. Synthetic aperture (SA) is a technique used to increase image resolution by synthesizing information from multiple subapertures, but the resolution improvement is limited by the physical size of the array transducer. With a large F-number, it is difficult to achieve high resolution in deep regions without extending the effective aperture size. We propose a method to extend the available aperture size for SA—called synthetic tracked aperture ultrasound (STRATUS) imaging—by sweeping an ultrasound transducer while tracking its orientation and location. Tracking information of the ultrasound probe is used to synthesize the signals received at different positions. Considering the practical implementation, we estimated the effect of tracking and ultrasound calibration errors on the quality of the final beamformed image through simulation. In addition, to experimentally validate this approach, a 6 degree-of-freedom robot arm was used as a mechanical tracker to hold an ultrasound transducer and to apply in-plane lateral translational motion. Results indicate that STRATUS imaging with robotic tracking has the potential to improve ultrasound image quality.
A requirement for reconstructing a photoacoustic (PA) image is channel data acquisition synchronized with laser firing. Unfortunately, most clinical ultrasound (US) systems do not offer an interface to obtain synchronized channel data. To broaden the impact of clinical PA imaging, we propose a PA image reconstruction algorithm that utilizes the US B-mode image, which is readily available from clinical scanners. A US B-mode image is produced by a series of signal processing steps: beamforming, followed by envelope detection, and ending with log compression. However, the image will be defocused when PA signals are the input because of the incorrect delay function. Our approach is to reverse the order of the image processing steps and recover the original US post-beamformed radio-frequency (RF) data, to which a synthetic aperture based PA rebeamforming algorithm can then be applied. Taking the B-mode image as input, we first recovered the US post-beamformed RF data by applying log decompression and convolving with an acoustic impulse response to reintroduce carrier frequency information. Then, the US post-beamformed RF data is used as pre-beamformed RF data for the adaptive PA beamforming algorithm, and the new delay function is applied, taking into account that the focal depth in US beamforming is at half the depth of the PA case. The feasibility of the proposed method was validated through simulation and experimentally demonstrated using an acoustic point source. The point source was successfully beamformed from a US B-mode image, and the full width at half maximum (FWHM) of the point improved 3.97 times. Compared to the ground-truth reconstruction using channel data, the FWHM was slightly degraded, by a factor of 1.28, due to information loss during envelope detection and convolution of the RF information.
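The recovery step described above can be sketched as follows, with an assumed display dynamic range, sampling rate, and centre frequency (the abstract does not give these values): undo the log compression, then convolve each scan line with a short carrier pulse to reintroduce RF structure.

```python
import numpy as np

fs = 40e6      # assumed sampling rate (Hz)
f0 = 5e6       # assumed transducer centre frequency (Hz)
dr_db = 60.0   # assumed display dynamic range of the B-mode image (dB)

def bmode_to_rf(bmode):
    """bmode: (n_samples, n_lines) image scaled to [0, 1]."""
    # 1) log decompression: invert the 20*log10 mapping over the dynamic range
    env = 10.0 ** (dr_db * (bmode - 1.0) / 20.0)
    # 2) re-modulation: convolve each line with a Gaussian-windowed carrier pulse
    t = np.arange(-2e-6, 2e-6, 1.0 / fs)
    pulse = np.cos(2 * np.pi * f0 * t) * np.exp(-(t / 0.5e-6) ** 2)
    return np.apply_along_axis(
        lambda line: np.convolve(line, pulse, mode="same"), 0, env)

bmode = np.zeros((512, 8))
bmode[256, :] = 1.0            # a bright point target on every line
rf = bmode_to_rf(bmode)        # oscillatory RF-like data, same shape
```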
Ultrasound (US) tomography enables quantitative measurement of acoustic properties. A robot-assisted ultrasound tomography system enables the alignment of two US probes. The alignment is done automatically by the robotic arm so that tomographic reconstruction of more anatomies becomes possible. In this study, we propose a new system setup for robot assistance in US tomographic imaging. This setup includes two robotic arms holding two US probes. One of the robotic arms is operated by the sonographer to determine the desired location for the tomographic imaging; this probe can also provide the B-mode US image during the search. The other robotic arm can then move automatically to align the two probes. One of the probes acts as the transmitter and the other as the receiver to enable tomographic imaging. We provide an overview of the system setup and components together with the calibration procedures. In an attempt to provide a complete framework for the tomography system, we also provide a sample tomographic reconstruction method that can reconstruct a speed of sound image using two aligned linear US probes. The reconstruction algorithm is, however, very prone to alignment inaccuracies. We provide an error propagation analysis to estimate the overall alignment error and then show the effect of the in-plane translational error on the tomographic reconstruction.
Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique
advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common
intraoperative medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance
with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be
performed. This process recovers the rigid body transformation between a tracked marker attached to the transducer and
the ultrasound image. Point-based phantoms are considered to be accurate, but their calibration framework assumes that
the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework
that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the
ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The
standard approach transforms all of the imaged points and constrains them to coincide at a single physical point. In our
approach, we minimize the distances between the circular subsets of each image, which ideally intersect at a single
point. We simulated noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point
reconstruction precision. We also performed an experiment using a robot arm as the tracker, resulting in a point
reconstruction precision of 0.64mm.
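The circle model described above can be made concrete with a small sketch (coordinate conventions are assumed): given the point's lateral position and its axial distance in the image, the unknown elevational offset places the physical point somewhere on a circle of that radius in the axial-elevational plane.

```python
import numpy as np

def candidate_points(x_lat, r_axial, n=64):
    """Sample the circle of possible 3-D point locations in image coordinates
    (lateral, axial, elevational); theta = 0 is the in-plane case."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, n)
    axial = r_axial * np.cos(theta)
    elev = r_axial * np.sin(theta)
    return np.stack([np.full(n, x_lat), axial, elev], axis=1)

# A point imaged at 5 mm lateral, 30 mm axial could physically lie anywhere
# on this arc; the calibration then minimizes distances between such arcs
# after transforming each one by its tracked pose.
pts = candidate_points(x_lat=5.0, r_axial=30.0)
```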
As thermal imaging attempts to estimate very small tissue motion (on the order of tens of microns), it can be negatively influenced by signal decorrelation. A patient's breathing and cardiac cycle generate shifts in the RF signal patterns. Other sources of movement can be found outside the patient's body, such as transducer slippage or small vibrations due to environmental factors like electronic noise. Here, we build upon a robust displacement estimation method for ultrasound elastography and investigate an iterative motion compensation algorithm, which can detect and remove non-heat-induced tissue motion at every step of the ablation procedure. The validation experiments are performed on laboratory-induced ablation lesions in ex-vivo tissue. The ultrasound probe is either held by the operator's hand or supported by a robotic arm. We demonstrate the ability to detect and remove non-heat-induced tissue motion in both settings. We show that removing extraneous motion helps unmask the effects of heating. Our strain estimation curves closely mirror the temperature changes within the tissue. While previous results in the area of motion compensation were reported for experiments lasting less than 10 seconds, our algorithm was tested on experiments that lasted close to 20 minutes.
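A toy sketch of the displacement-estimation building block underlying such methods (a generic cross-correlation peak search, not the authors' robust estimator): the axial shift between two RF windows is the lag that maximizes their cross-correlation.

```python
import numpy as np

def axial_shift(ref, cur):
    """Integer-sample axial shift of `cur` relative to `ref` via the peak of
    the full cross-correlation (positive = cur is delayed)."""
    ref = ref - ref.mean()
    cur = cur - cur.mean()
    corr = np.correlate(cur, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

# Toy data: a pulse-like window shifted by 3 samples.
x = np.exp(-((np.arange(500) - 250) / 20.0) ** 2)
shifted = np.roll(x, 3)
shift = axial_shift(x, shifted)   # -> 3
```

Sub-sample refinement (e.g. parabolic peak interpolation) would be needed in practice, since thermal strain is far below one sample of motion.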
We investigated a novel needle visualization using the PA effect to enhance needle-tip tracking. An optical fiber and
laser source are used to generate acoustic waves inside the needle with the PA effect. Acoustic waves are generated
along the needle. Some amount of acoustic energy leaks into the surrounding material. The leakage of acoustic waves is
captured by a conventional US transducer and US channel data collection system. Then, the collected data are converted
to a PA image. The needle tip can be visualized more clearly in this PA image than in a conventional US brightness-mode image.
Targeted contrast agents can improve the sensitivity of imaging systems for cancer detection and treatment monitoring. In order to accurately detect contrast agent concentrations from photoacoustic images, we developed a decomposition algorithm to separate the photoacoustic absorption spectrum into components from individual absorbers. In this study, we evaluated novel prostate-specific membrane antigen (PSMA)-targeted agents for imaging prostate cancer. Three agents were synthesized by conjugating a PSMA-targeting urea with the optical dyes ICG, IRDye800CW, and ATTO740, respectively. In our preliminary PA study, dyes were injected into a thin-walled plastic tube embedded in a water tank. The tube was illuminated with pulsed laser light from a tunable Q-switched Nd:YAG laser. The PA signal, along with B-mode ultrasound images, was detected with a diagnostic ultrasound probe in orthogonal mode. PA spectra of each dye at 0.5 to 20 μM concentrations were estimated using the maximum PA signal extracted from images obtained at illumination wavelengths of 700-850 nm. Subsequently, we developed a nonnegative linear least-squares optimization method, along with localized regularization, to solve the spectral unmixing problem. The algorithm was tested by imaging mixtures of those dyes. The concentration of each dye was estimated with about 20% error on average from almost all mixtures, despite the small separation between the dyes' spectra.
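The unmixing step can be sketched as a small nonnegative least-squares problem; in this sketch, toy Gaussian curves stand in for the measured dye spectra, and the localized regularization is omitted.

```python
import numpy as np
from scipy.optimize import nnls

# Columns of A: pure-component PA spectra across the illumination
# wavelengths; b: spectrum measured from a mixture. NNLS recovers each
# component's nonnegative weight.
wavelengths = np.arange(700, 860, 10)                    # 700-850 nm
A = np.stack([np.exp(-((wavelengths - c0) / 40.0) ** 2)  # toy spectra
              for c0 in (720, 780, 820)], axis=1)

c_true = np.array([2.0, 0.0, 1.0])                       # toy mixture weights
b = A @ c_true
c_est, residual = nnls(A, b)                             # recovers c_true here
```

With real data, overlapping spectra make the system ill-conditioned, which is where regularization becomes necessary.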
Photoacoustic (PA) imaging is becoming an important tool for various clinical and pre-clinical applications. Acquiring pre-beamformed channel ultrasound data is essential to reconstruct PA images. Accessing these pre-beamformed channel data requires custom hardware to allow parallel beamforming, and such access is available on only a few research ultrasound platforms. However, post-beamformed radio frequency (RF) data is readily available in real-time on several clinical and research ultrasound platforms. To broaden the impact of clinical PA imaging, our goal is to devise a new PA reconstruction approach based on this post-beamformed RF data. In this paper, we propose to generate a PA image from RF data beamformed with a single receive focus. This beamformed RF data is treated as pre-beamformed input to a synthetic aperture beamforming algorithm, where the focal point of each received RF line is a virtual element. The image resolution is determined by the fixed focusing depth as well as the aperture size used in fixed focusing. In addition, a signal-to-noise ratio (SNR) improvement is expected because beamforming is performed twice with different noise distributions. The performance of the proposed method is analyzed through simulation, and its practical feasibility is validated experimentally. The results indicate that post-beamformed RF data can be re-beamformed into a PA image using the proposed synthetic aperture beamformer.
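A minimal delay-and-sum sketch of the virtual-element idea (the geometry, the one-way delay model, and all parameter values are assumptions for illustration, not the authors' implementation): each fixed-focus RF line contributes to a pixel with a delay computed from its virtual element at the focal point.

```python
import numpy as np

def rebeamform(rf, x_lines, z_focus, x_px, z_px, c=1540.0, fs=40e6):
    """rf: (n_samples, n_lines) fixed-focus beamformed RF treated as channel
    data; line j's virtual element sits at (x_lines[j], z_focus). One-way
    (photoacoustic) propagation is assumed."""
    out = np.zeros((len(z_px), len(x_px)))
    for j, xl in enumerate(x_lines):
        dz = z_px[:, None] - z_focus                      # pixel depth rel. focus
        d = np.sqrt((x_px[None, :] - xl) ** 2 + dz ** 2)  # virtual elem -> pixel
        t = (z_focus + np.sign(dz) * d) / c               # one-way arrival time
        idx = np.clip(np.rint(t * fs).astype(int), 0, rf.shape[0] - 1)
        out += rf[idx, j]
    return out

# Toy check: a delta on one line at the sample matching a pixel 10 mm past
# the 20 mm focus should reconstruct at that pixel.
rf = np.zeros((2048, 1))
rf[779, 0] = 1.0                 # round((0.02 + 0.01) / 1540 * 40e6) = 779
img = rebeamform(rf, x_lines=[0.0], z_focus=0.02,
                 x_px=np.array([0.0]), z_px=np.array([0.03]))
```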
Fusion of video and other imaging modalities is common in modern surgical scenarios to provide surgeons with additional information. Doing so requires the use of interventional guidance equipment and surgical navigation systems to register the tools and devices used in surgery with each other. In this work, we focus explicitly on registering ultrasound with a stereocamera system using photoacoustic markers. Previous work has shown that photoacoustic markers can be used to register three-dimensional ultrasound with video, resulting in target registration errors lower than those of currently available systems. Photoacoustic markers are non-collinear laser spots projected onto some surface. They can be simultaneously visualized by a stereocamera system and in an ultrasound volume because of the photoacoustic effect. This work replaces the three-dimensional ultrasound volume with images from a single ultrasound image pose. While an ultrasound volume provides more information than an ultrasound image, it has disadvantages such as higher cost and slower acquisition rate. However, in general, it is difficult to register two-dimensional with three-dimensional spatial data. We propose the use of photoacoustic markers viewed by a convex array ultrasound transducer. Each photoacoustic marker's wavefront provides information on its elevational position, resulting in three-dimensional spatial data. This development enhances this method's practicality, as convex array transducers are more common in surgical practice than three-dimensional transducers. This work is demonstrated on a synthetic phantom. The resulting target registration error for this experiment was 2.47mm and the standard deviation was 1.29mm, which is comparable to currently available systems.
Photoacoustic imaging has broad clinical potential to enhance prostate cancer detection and treatment, yet it is challenged by the lack of minimally invasive, deeply penetrating light delivery methods that provide sufficient visualization of targets (e.g., tumors, contrast agents, brachytherapy seeds). We constructed a side-firing fiber prototype for transurethral photoacoustic imaging of prostates with a dual-array (linear and curvilinear) transrectal ultrasound probe. A method to calculate the surface area and, thereby, estimate the laser fluence at this fiber tip was derived, validated, applied to various design parameters, and used as an input to three-dimensional Monte Carlo simulations. Brachytherapy seeds implanted in phantom, ex vivo, and in vivo canine prostates at radial distances of 5 to 30 mm from the urethra were imaged with the fiber prototype transmitting 1064 nm wavelength light with 2 to 8 mJ pulse energy. Prebeamformed images were displayed in real time at a rate of 3 to 5 frames per second to guide fiber placement and beamformed offline. A conventional delay-and-sum beamformer provided decreasing seed contrast (23 to 9 dB) with increasing urethra-to-target distance, while the short-lag spatial coherence beamformer provided improved and relatively constant seed contrast (28 to 32 dB) regardless of distance, thus improving multitarget visualization in single and combined curvilinear images acquired with the fiber rotating and the probe fixed. The proposed light delivery and beamforming methods promise to improve key prostate cancer detection and treatment strategies.
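The short-lag spatial coherence (SLSC) value used above can be sketched for a single pixel (channel data are assumed already delayed; kernel extraction and the imaging loop are omitted): average the normalized cross-correlation between channel pairs at each lag, then sum over the first few lags.

```python
import numpy as np

def slsc_value(s, M=5, eps=1e-12):
    """s: (n_channels, n_kernel_samples) delayed channel data for one pixel.
    Returns coherence summed over lags 1..M (coherent data -> approx. M)."""
    n_ch = s.shape[0]
    total = 0.0
    for m in range(1, M + 1):
        r = 0.0
        for i in range(n_ch - m):
            num = float(np.dot(s[i], s[i + m]))
            den = float(np.sqrt(np.dot(s[i], s[i]) * np.dot(s[i + m], s[i + m]))) + eps
            r += num / den
        total += r / (n_ch - m)   # average correlation at lag m
    return total

# Perfectly coherent channels (identical waveforms) give a value close to M;
# incoherent noise gives a value near zero, which is why targets like seeds
# keep contrast while diffuse clutter is suppressed.
coherent = np.tile(np.sin(np.linspace(0.0, 6.0, 32)), (16, 1))
v = slsc_value(coherent, M=5)
```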
KEYWORDS: Data acquisition, Software frameworks, Ultrasonography, Transducers, 3D acquisition, Image quality, Sensors, Calibration, 3D image processing, Medical imaging
Acquisition of ultrasound (US) pre-beamformed radio-frequency (RF) data is essential in photoacoustic (PA) imaging research. Moreover, 3D PA imaging can provide volumetric information for a target of interest. However, existing 3D PA systems require specifically designed motion stages, an ultrasound scanner, and a data acquisition system to collect 3D pre-beamformed RF data. These systems are incompatible with clinical ultrasound systems and are difficult to reconfigure and generalize to other PA research. To overcome these limitations, we proposed and developed a new software framework for spatially-tracked pre-beamformed RF data acquisition with a conventional 2D ultrasound transducer and an external tracking device. We upgraded our previous software framework using task-classes of OpenIGTLinkMUSiiC 2.0 and MUSiiCToolkit 2.0. We also improved our MUSiiCToolKit 2.0 by adding MUSiiCNotes 2.0, a collection of specific task-classes for US research. MUSiiC-DAQServer 2.0, MUSiiC-TrackerServer, and MUSiiCSync are the main modules of our software framework. Spatially-tracked 2D PA frames are collected efficiently using this software framework for 3D PA research and imaging. The software modules of our software framework are based on the concept of network-distributed modules and can simultaneously support multiple client connections via a TCP/IP network. In addition, the collected 2D PA frames are compatible with other MUSiiCToolKit 2.0 modules such as the MUSiiC-Beamform, MUSiiC-BMode, and MUSiiC-ImageViewer modules. These aspects of our software framework allow us to easily reconfigure and customize our system for other PA or US research.
Optoacoustic sensing is a hybrid technique that combines the advantages of high sensing depth of ultrasound with
contrast of optical absorption. In this study a miniature optoacoustic probe that can characterize the target properties
located at the distal end of a catheter is investigated. The probe includes an optical fiber to illuminate the target with the
pulsed laser light and a hydrophone to detect the generated optoacoustic signal. The probe is designed for forward
sensing, and therefore the acoustic signal propagates along the tube before being detected. Due to the circular geometry,
the waves inside the tube are highly complex. A three dimensional numerical simulation is performed to model the
optoacoustic wave generation and propagation inside the water filled cylindrical tubes. The effect of the boundary
condition, tube diameter and target size on the detected signal is systematically evaluated. A prototype of the probe is
made and tested for detecting an absorbing target inside a 2mm diameter tube submerged in water. Preliminary
experimental results corresponding to the simulation were acquired. Although many different medical applications for this
miniature probe may exist, our main focus is on detecting occlusions inside ventricular shunts. These catheters are
used to divert excess cerebrospinal fluid to the absorption site and regulate the intracranial pressure of hydrocephalus
patients. Unfortunately, the malfunction rate of these catheters due to blockage is very high. This sensing tool could
locate the occluding tissue non-invasively and could potentially characterize the occlusion composition by scanning at
different wavelengths of light.
In recent years, various methods have been developed to improve ultrasound-based interventional tool tracking. However, none of them has yet provided a solution that effectively solves the tool visualization and mid-plane localization accuracy problem and fully meets the clinical requirements. Our previous work demonstrated a new active ultrasound pattern injection system (AUSPIS), which integrates active ultrasound transducers with the interventional tool, actively monitors the beacon signals, and transmits ultrasound pulses back to the US probe with the correct timing. Ex vivo and in vivo experiments have proved that AUSPIS greatly improved tool visualization and provided tool-tip localization accuracy of less than 300 μm. In the previous work, the active elements were made of piezoelectric materials. However, in some applications the high driving voltage of the piezoelectric element raises safety concerns. In addition, the metallic electrical wires connecting the piezoelectric element may also cause artifacts in CT and MR imaging. This work explicitly focuses on an all-optical active ultrasound element approach to overcome these problems. In this approach, the active ultrasound element is composed of two optical fibers - one for transmission and one for reception. The transmission fiber delivers a laser beam from a pulsed laser diode and excites a photoacoustic target to generate ultrasound pulses. The reception fiber is a Fabry–Pérot hydrophone. We have made a prototype catheter and performed phantom experiments. Catheter tip localization, mid-plane detection, and arbitrary pattern injection functions have been demonstrated using the all-optical AUSPIS.
We present a novel approach to photoacoustic imaging of prostate brachytherapy seeds utilizing an existing urinary catheter for transurethral light delivery. Two canine prostates were surgically implanted with brachytherapy seeds under transrectal ultrasound guidance. One prostate was excised shortly after euthanasia and fixed in gelatin. The second prostate was imaged in the native tissue environment shortly after euthanasia. A urinary catheter was inserted in the urethra of each prostate. A 1-mm core diameter optical fiber coupled to a 1064 nm Nd:YAG laser was inserted into the urinary catheter. Light from the fiber was either directed mostly parallel to the fiber axis (i.e., end-fire fiber) or mostly 90° to the fiber axis (i.e., side-fire fiber). An Ultrasonix SonixTouch scanner, a transrectal ultrasound probe with curvilinear (BPC8-4) and linear (BPL9-5) arrays, and a DAQ unit were utilized for synchronized laser light emission and photoacoustic signal acquisition. The implanted brachytherapy seeds were visualized at radial distances of 6-16 mm from the catheter. Multiple brachytherapy seeds were simultaneously visualized with each array of the transrectal probe using both delay-and-sum (DAS) and short-lag spatial coherence (SLSC) beamforming. This work is the first to demonstrate the feasibility of photoacoustic imaging of prostate brachytherapy seeds using a transurethral light delivery method.
Fusion of video and other imaging modalities is common in modern surgical procedures to provide surgeons with additional information for precise surgical guidance. One example uses interventional guidance equipment and surgical navigation systems to register the tools and devices used in surgery with each other. In this work, we focus explicitly on registering three-dimensional ultrasound with a stereocamera system. These surgical navigation systems often use optical or electromagnetic trackers. However, both of these tracking systems have various drawbacks leading to target registration errors of approximately 3mm. Previous work has shown that photoacoustic markers can be used to register three-dimensional ultrasound with video, resulting in target registration errors much lower than the current state of the art. This work extends that idea by generating multiple photoacoustic markers concurrently, as opposed to the sequential method used in the previous work. This development reduces the acquisition time by a factor equal to the number of concurrently generated photoacoustic markers. This work is demonstrated on a synthetic phantom and an ex vivo porcine kidney phantom. The resulting target registration errors for these experiments ranged from 840 to 1360 μm, with standard deviations from 370 to 640 μm.
Photoacoustic (PA) imaging is an emerging medical imaging modality that relies on the absorption of optical energy and the subsequent emission of acoustic waves that are detected with a conventional ultrasound probe. PA images are susceptible to background noise artifacts that reduce the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). We investigated spatial-angular compounding of PA images to enhance these image qualities. Spatial-angular compounding was implemented by averaging multiple PA images acquired as an ultrasound probe was rotated about the elevational axis with the laser beam and PA target fixed in the same location. An external tracking system was used to provide the position and orientation (i.e., pose) information of each PA image. Based on this pose information, frames in similar elevational planes were filtered from the acquired image data and compounded using one of two methods. One method registered overlapping signals between frames prior to compounding (using the pose information), while the second method omitted this spatial registration step. These two methods were applied to pre-beamformed RF, beamformed RF, and envelope-detected data, resulting in six different compounding pipelines. Compounded PA images with lateral resolution similar to a single reference image showed improvements by factors of 1.1-1.6, 2.0-11.1, and 2.0-11.1 in contrast, CNR, and SNR, respectively, when compared to the reference image. These improvements depended on the amount of relative motion between the reference image and the images that were compounded. The inclusion of spatial registration prior to compounding preserved lateral resolution and signal location when the relative rotations about the elevational axis were 3.5° or less for images within an elevational distance of 2.5 mm from the reference image, particularly when the method was applied to the envelope-detected data.
Results indicate that spatial-angular compounding has the potential to improve image quality for a variety of photoacoustic imaging applications.
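The pose-filtered compounding described above can be sketched simply (the frame layout and angle tolerance are assumptions, and the spatial-registration variant is omitted): keep the frames whose rotation about the elevational axis is within the tolerance of the reference frame, then average.

```python
import numpy as np

def compound(frames, angles_deg, ref_idx=0, max_angle=3.5):
    """frames: (n_frames, H, W) envelope-detected PA images; angles_deg:
    rotation about the elevational axis for each frame (from the tracker).
    Averages the frames within max_angle degrees of the reference frame."""
    angles = np.asarray(angles_deg, dtype=float)
    keep = np.abs(angles - angles[ref_idx]) <= max_angle
    return np.asarray(frames)[keep].mean(axis=0)

# Toy frames: the third frame (8 degrees away) is excluded from the average.
frames = np.stack([np.full((2, 2), v) for v in (1.0, 2.0, 10.0)])
out = compound(frames, angles_deg=[0.0, 2.0, 8.0])   # mean of frames 0 and 1
```

Averaging N frames with independent noise improves SNR by up to sqrt(N), which is the mechanism behind the reported gains.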
Ventricular catheters are used to treat hydrocephalus by diverting excess cerebrospinal fluid (CSF) to the reabsorption site so as to regulate the intracranial pressure. The failure rate of these shunts is extremely high due to ingrown tissue that blocks the CSF flow. We have studied a method to image the occlusion inside the shunt through the skull. In this approach, pulsed laser light coupled into an optical fiber illuminates the occluding tissue inside the catheter, and an external ultrasound transducer is applied to detect the generated photoacoustic signal. The feasibility of this method is investigated using a phantom made of ovine (Ovis aries) brain tissue and adult human skull. We were able to image the target inside the shunt located 20mm deep inside the brain through approximately 4mm-thick skull bone. This study could lead to the development of a simple, safe, and non-invasive device for percutaneous restoration of patency to occluded shunts. This would eliminate the need for surgical replacement of occluded catheters, which exposes patients to risks including hemorrhage and brain injury.
KEYWORDS: Photoacoustic spectroscopy, Data acquisition, Ultrasonography, Photoacoustic imaging, Imaging systems, Software frameworks, Data communications, Data conversion, Laser systems engineering, Visualization
Acquisition of pre-beamformed data is essential in advanced imaging research studies such as adaptive beamforming,
synthetic aperture imaging, and photoacoustic imaging. Ultrasonix Co. has developed such a data acquisition device for
pre-beamformed data known as the SONIX-DAQ, but data can only be downloaded and processed offline rather than
streamed in real-time. In this work, we developed a software framework to extend the functionality of the SONIX-DAQ
for streaming and processing data in near real-time. As an example, we applied this functionality to our previous work of
visualizing photoacoustic images of prostate brachytherapy seeds. In this paper, we present our software framework,
applying it to a real-time photoacoustic imaging system, including real-time data collection and data-processing software
modules for brachytherapy treatment.