Electromagnetic (EM) tracking systems are often used for real-time navigation of medical tools in an Image-Guided
Therapy (IGT) system. They are particularly advantageous when the medical device requires tracking
within the body of a patient, where line-of-sight constraints prevent the use of conventional optical tracking. EM
tracking systems are, however, very sensitive to electromagnetic field distortions. These distortions, arising from
changes in the electromagnetic environment due to the presence of conductive or ferromagnetic surgical tools or
other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data
unusable. We present a mapping method for the operating region over which EM tracking sensors are used,
allowing for characterization of measurement errors, in turn providing physicians with visual feedback about
measurement confidence or reliability of localization estimates.
In this instance, we employ a calibration phantom to assess distortion within the operating field of the
EM tracker and to display in real time the distribution of measurement errors, as well as the location and
extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive
measurements. Error is computed for a reference point and consecutive measurement errors are displayed relative
to the reference in order to characterize the accuracy in near-real-time. In an initial set-up phase, the phantom
geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean")
EM environment. The registration results in the locations of sensors with respect to each other and defines
the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from
all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement
and orientation) are computed. Based on error thresholds provided by the operator, the spatial distribution of
localization errors is clustered and dynamically displayed as separate confidence zones within the operating
region of the EM tracker space.
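A minimal sketch of the measurement-phase computation is given below, under the assumption that the calibrated phantom geometry is stored as pairwise inter-sensor distances: live distances are compared against the calibrated ones, per-sensor errors are derived, and operator-supplied thresholds bin the sensors into confidence zones. Names and threshold values are illustrative, not those of our implementation.

```python
# Sketch only: compare live inter-sensor distances against the calibrated ("clean")
# geometry and bin per-sensor errors into operator-defined confidence zones.
import numpy as np

def displacement_errors(positions, calibrated_distances):
    """positions: (N, 3) live sensor positions; calibrated_distances: (N, N)
    inter-sensor distances measured in the distortion-free set-up phase."""
    diff = positions[:, None, :] - positions[None, :, :]
    measured = np.linalg.norm(diff, axis=-1)           # (N, N) live distances
    err = np.abs(measured - calibrated_distances)      # pairwise distance errors
    np.fill_diagonal(err, 0.0)
    # attribute to each sensor the mean error over its pairs with all other sensors
    return err.sum(axis=1) / (err.shape[0] - 1)

def confidence_zones(errors, thresholds=(0.5, 2.0)):
    """Bin sensors into zones: 0 = high confidence, 1 = moderate, 2 = low,
    using operator-supplied error thresholds in mm (placeholder values)."""
    return np.digitize(errors, thresholds)
```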
Automatic segmentation is a prerequisite to efficiently analyze the large amount of image data produced by modern imaging
modalities, e.g., computed tomography (CT), magnetic resonance (MR) and rotational X-ray volume imaging. While many
segmentation approaches exist, most of them are developed for a single, specific imaging modality and a single organ. In
clinical practice, however, it is becoming increasingly important to handle multiple modalities: First due to a case-specific
choice of the most suitable imaging modality (e.g. CT versus MR), and second in order to integrate complementary data
from multiple modalities. In this paper, we present a single, integrated segmentation framework which can easily be
adapted to a range of imaging modalities and organs. Our algorithm is based on shape-constrained deformable models. Key
elements are (1) a shape model representing the geometry and variability of the target organ of interest, (2) spatially varying
boundary detection functions representing the gray value appearance of the organ boundaries for the specific imaging
modality or protocol, and (3) a multi-stage segmentation approach. Focusing on fully automatic heart segmentation, we
present evaluation results for CT, MR (contrast-enhanced and non-contrasted), and rotational X-ray angiography (3-D RA).
We achieved a mean segmentation error of about 0.8 mm for CT and (non-contrasted) MR, 1.0 mm for contrast-enhanced
MR, and 1.3 mm for 3-D RA, demonstrating the success of our segmentation framework across modalities.
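As an illustration of element (2), the sketch below is a greatly simplified, gradient-based stand-in for the spatially varying boundary detection functions: for each mesh triangle, candidate points are sampled along the surface normal and the strongest edge response is selected as a deformation target. The trained, modality-specific appearance functions of the actual framework are not reproduced here.

```python
# Simplified boundary detection: search along each surface normal for the strongest
# image gradient. Stands in for the trained, spatially varying detection functions.
import numpy as np
from scipy.ndimage import map_coordinates

def detect_boundaries(volume, centers, normals, search_range=10, step=0.5):
    """volume: 3-D image array; centers, normals: (T, 3) triangle centers and unit
    normals in voxel coordinates. Returns (T, 3) detected boundary target points."""
    offsets = np.arange(-search_range, search_range + step, step)
    targets = np.empty_like(centers)
    for i, (c, n) in enumerate(zip(centers, normals)):
        samples = c[None, :] + offsets[:, None] * n[None, :]   # points along the normal
        vals = map_coordinates(volume, samples.T, order=1)     # interpolated intensities
        grad = np.gradient(vals)                               # directional derivative
        best = np.argmax(np.abs(grad))                         # strongest edge response
        targets[i] = samples[best]
    return targets
```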
Imaging techniques try to identify patients who may respond to cardiac resynchronization therapy (CRT). However, it
may be clinically more useful to identify patients for whom CRT would not be beneficial as the procedure would not be
indicated for this group. We developed a novel, clinically feasible and technically simple echocardiographic
dyssynchrony index and tested its negative predictive value. Subjects with standard indications for CRT underwent
echocardiography pre- and post-device implantation. Atrial-ventricular dyssynchrony was defined as a left ventricular (LV) filling time of
<40% of the cardiac cycle. Intra-ventricular dyssynchrony was quantified as the magnitude of LV apical rocking. The
apical rocking was measured using tissue displacement estimates from echo data. In a 4-chamber view, a region of
interest was positioned within the apical end of the middle segment within each wall. Tissue displacement curves were
analyzed with custom software in MATLAB. Rocking was quantified as a percentage of the cardiac cycle over which the
displacement curves showed discordant behavior and classified as non-significant for values <35%. Validation in 50
patients showed that absence of significant LV apical rocking or atrial-ventricular dyssynchrony was associated with
non-response to CRT. This measure may therefore be useful in screening to avoid non-therapeutic CRT procedures.
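A minimal sketch of how such a rocking index could be computed from two tracked displacement curves is shown below; it treats "discordant behavior" as opposite-sign frame-to-frame motion of the opposing walls, which is an assumption rather than the exact definition implemented in the custom MATLAB software.

```python
# Hedged sketch of the rocking metric: the fraction of the cardiac cycle during which
# the displacement curves of the two opposing apical ROIs move in opposite directions.
import numpy as np

def rocking_percentage(disp_septal, disp_lateral):
    """disp_septal, disp_lateral: 1-D displacement curves over one cardiac cycle."""
    v1 = np.diff(disp_septal)            # frame-to-frame motion of one wall
    v2 = np.diff(disp_lateral)           # frame-to-frame motion of the opposing wall
    discordant = (v1 * v2) < 0           # opposite-sign motion => rocking
    return 100.0 * discordant.mean()

def significant_rocking(disp_septal, disp_lateral, threshold=35.0):
    """Classified as non-significant for values below the 35% threshold quoted above."""
    return rocking_percentage(disp_septal, disp_lateral) >= threshold
```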
Knowledge of patient-specific cardiac anatomy is required for catheter-based epicardial ablation
procedures such as ventricular tachycardia (VT) interventions. In particular, knowledge of
critical structures such as the coronary arteries is essential to avoid collateral damage. In such ablation
procedures, ablation catheters are brought in via minimally-invasive subxiphoid access. The catheter is
then steered to ablation target sites on the left ventricle (LV). During ablation and catheter navigation,
it is of vital importance to avoid damage to coronary structures. Contrast-enhanced rotational X-ray
angiography of the coronary arteries delivers a 3D impression of the anatomy during the time of intervention.
Vessel modeling techniques have been shown to be able to deliver accurate 3D anatomical models
of the coronary arteries. To simplify epicardial navigation and ablation, we propose to overlay coronary
arterial models, derived from rotational X-ray angiography and vessel modeling, onto real-time X-ray
fluoroscopy. In a preclinical animal study, we show that overlay of intra-operatively acquired 3D arterial
models onto X-ray helps to place ablation lesions at a safe distance from coronary structures. Example
ablation lesions have been placed based on the model overlay at reasonable distances between key arterial
vessels and on top of marginal branches.
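The overlay itself amounts to projecting the 3-D arterial model through the calibrated C-arm geometry onto the live fluoroscopy frame. A minimal sketch is given below; the 3x4 projection matrix is assumed to come from the system calibration and is not part of the text above.

```python
# Illustrative sketch: project 3-D coronary model points through a calibrated C-arm
# projection matrix P to obtain 2-D detector-pixel coordinates for the overlay.
import numpy as np

def project_points(points_3d, P):
    """points_3d: (N, 3) model points in C-arm/world coordinates;
    P: 3x4 perspective projection matrix (assumed from calibration).
    Returns (N, 2) detector-pixel coordinates."""
    homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # homogeneous coords
    proj = homog @ P.T                                            # (N, 3)
    return proj[:, :2] / proj[:, 2:3]                             # perspective divide
```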
We present and validate image-based speckle-tracking calipers for quantification of tissue deformation and rotation
in dynamic cardiovascular phantom models. Lagrangian strain was computed from the change in distance
between caliper regions-of-interest (ROIs) positioned within the wall of a pulsatile phantom and compared with
reference measurements derived from cardiac CT imaging. In a torsion phantom, rotational tissue excursion
in a 2D plane was estimated and compared with reference values from CT-scan data. Tissue deformation and
rotation measurements correlated well with their respective reference measurements. Our algorithm is capable
of estimating strain and rotation from distinct tissue regions without requiring explicit cardiac border detection,
a step which can be especially challenging in patients with poor acoustic windows.
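A minimal sketch of the two caliper measurements is shown below, assuming the ROI centres have already been tracked by the speckle-tracking step: Lagrangian strain from the change in inter-ROI distance, and in-plane rotational excursion about a supplied centre. Variable names are illustrative.

```python
# Sketch of the caliper measurements on tracked ROI centre trajectories.
import numpy as np

def lagrangian_strain(roi_a, roi_b):
    """roi_a, roi_b: (T, 2) tracked ROI centre positions over T frames.
    Returns strain(t) = (L(t) - L0) / L0 relative to the first frame."""
    L = np.linalg.norm(roi_a - roi_b, axis=1)
    return (L - L[0]) / L[0]

def rotation_deg(roi, centre):
    """In-plane rotational excursion (degrees) of a tracked ROI about a fixed centre."""
    rel = roi - centre
    angles = np.unwrap(np.arctan2(rel[:, 1], rel[:, 0]))
    return np.degrees(angles - angles[0])
```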
Interventional cardiac electrophysiology (EP) procedures are typically performed under X-ray fluoroscopy for
visualizing catheters and EP devices relative to other highly-attenuating structures such as the thoracic spine
and ribs. These projections, however, do not contain information about soft-tissue anatomy, and there is a
recognized need for fusion of conventional fluoroscopy with pre-operatively acquired cardiac multislice computed
tomography (MSCT) volumes. Rapid 2D-3D integration in this application would allow for real-time visualization
of all catheters present within the thorax in relation to the cardiovascular anatomy visible in MSCT. We present a
method for rapid fusion of 2D X-ray fluoroscopy with 3D MSCT that can facilitate EP mapping and interventional
procedures by reducing the need for intra-operative contrast injections to visualize heart chambers and specialized
systems to track catheters within the cardiovascular anatomy. We use hardware-accelerated ray-casting to
compute digitally reconstructed radiographs (DRRs) from the MSCT volume and iteratively optimize the rigid-body
pose of the volumetric data to maximize the similarity between the MSCT-derived DRR and the intra-operative
X-ray projection data.
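The registration loop can be sketched as follows, assuming a DRR renderer (`render_drr`, a placeholder for the hardware-accelerated ray-caster) and using normalized cross-correlation as an example similarity measure; the metric actually used is not specified in the text above.

```python
# Schematic 2D-3D registration loop: optimise the six rigid-body parameters so that
# the MSCT-derived DRR best matches the intra-operative X-ray projection.
import numpy as np
from scipy.optimize import minimize

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return (a * b).mean()

def register(volume, xray, render_drr, pose0):
    """pose0: initial 6-vector (3 rotations, 3 translations); render_drr is a
    hypothetical renderer standing in for the GPU ray-caster."""
    def cost(pose):
        drr = render_drr(volume, pose)
        return -ncc(drr, xray)            # maximise similarity
    result = minimize(cost, pose0, method="Powell")
    return result.x
```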
Catheter-based ablation procedures in the left atrium and pulmonary veins (LAPV) for treatment of atrial fibrillation
in cardiac electrophysiology (EP) are complex and require knowledge of heart chamber anatomy. Electroanatomical
mapping (EAM) is typically used to define cardiac structures by combining electromagnetic
spatial catheter localization with surface models which interpolate the anatomy between EAM point locations
in 3D. Recently, the incorporation of pre-operative volumetric CT or MR data sets has allowed for more detailed
maps of LAPV anatomy to be used intra-operatively. Pre-operative data sets are, however, only a rough guide,
since they can be acquired several days to weeks prior to the EP intervention. Due to positional and physiological
changes, the intra-operative cardiac anatomy can be different from that depicted in the pre-operative data.
We present an application of contrast-enhanced rotational X-ray imaging for CT-like reconstruction of 3D
LAPV anatomy during the intervention itself. Depending on the heart size, one or two selective contrast-enhanced
rotational acquisitions are performed, and CT-like volumes are reconstructed with 3D filtered back
projection. In case of dual injection, the two volumes depicting the left and right portions of the LAPV are
registered and fused. The data sets are visualized and segmented intra-procedurally to provide anatomical
data and surface models for intervention guidance. Our results from animal and human experiments indicate
that the anatomical information from intra-operative CT-like reconstructions compares favorably with pre-acquired
imaging data and can be of sufficient quality for intra-operative guidance.
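As a rough illustration of the reconstruction step, the sketch below implements a 2-D parallel-beam analogue of filtered back projection (ramp filter in the Fourier domain followed by back projection); the intra-procedural reconstruction itself is a 3-D cone-beam variant and is considerably more involved.

```python
# Greatly simplified 2-D parallel-beam analogue of filtered back projection.
# sinogram[i, :] is the projection acquired at angle thetas[i] (radians).
import numpy as np

def fbp_2d(sinogram, thetas):
    n_angles, n_det = sinogram.shape
    # ramp filter applied in the Fourier domain, detector row by detector row
    freqs = np.fft.fftfreq(n_det)
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))
    # back-project each filtered row along its acquisition angle
    recon = np.zeros((n_det, n_det))
    centre = n_det // 2
    ys, xs = np.mgrid[:n_det, :n_det] - centre
    for row, theta in zip(filtered, thetas):
        t = xs * np.cos(theta) + ys * np.sin(theta) + centre   # detector coordinate
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += row[idx]
    return recon * np.pi / (2 * n_angles)
```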
This work presents an integrated system for multimodality image guidance of minimally invasive medical procedures.
This software and hardware system offers real-time integration and registration of multiple image streams with
localization data from navigation systems. All system components communicate over a local area Ethernet network,
enabling rapid and flexible deployment configurations. As a representative configuration, we use X-ray fluoroscopy
(XF) and ultrasound (US) imaging. The XF imaging system serves as the world coordinate system, with gantry geometry
derived from the imaging system, and patient table position tracked with a custom-built measurement device using linear
encoders. An electromagnetic (EM) tracking system is registered to the XF space using a custom imaging phantom that
is also tracked by the EM system. The RMS fiducial registration error for the EM to X-ray registration was 2.19 mm,
and the RMS target registration error measured with an EM-tracked catheter was 8.81 mm. The US image stream is
subsequently registered to the XF coordinate system using EM tracking of the probe, following a calibration of the US
image within the EM coordinate system. We present qualitative results of the system in operation, demonstrating the
integration of live ultrasound imaging spatially registered to X-ray fluoroscopy with catheter localization using
electromagnetic tracking.
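The EM-to-X-ray step is a standard paired-point rigid registration; a minimal sketch using a least-squares (Kabsch) fit of the phantom fiducials, reporting the RMS fiducial registration error, is shown below. Variable names are illustrative.

```python
# Sketch of paired-point rigid registration (Kabsch/Procrustes) with RMS FRE.
import numpy as np

def rigid_register(em_points, xray_points):
    """em_points, xray_points: (N, 3) paired fiducial coordinates.
    Returns rotation R, translation t mapping EM -> X-ray, and the RMS FRE."""
    em_c, xr_c = em_points.mean(axis=0), xray_points.mean(axis=0)
    H = (em_points - em_c).T @ (xray_points - xr_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = xr_c - R @ em_c
    residuals = (em_points @ R.T + t) - xray_points
    fre = np.sqrt((residuals ** 2).sum(axis=1).mean())
    return R, t, fre
```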
Image-guided therapy for electrophysiology applications requires integration of pre-procedural volumetric imaging
data with intra-procedural electroanatomical mapping (EAM) information. Existing methods for fusion of
EAM and imaging data are based on fiducial landmark identification or point-to-surface distance minimization
algorithms, both of which require detailed EAM mapping. This mapping procedure requires specific selection
of points on the endocardial surface and this point acquisition process is skill-dependent, time-consuming and
labor-intensive. The mapping catheter tip must first be navigated to a landmark on the endocardium, tip contact
must be verified, and finally the tip location must be explicitly annotated within the EAM data record. This
process of individual landmark identification and annotation must be repeated carefully >50 times to define
endocardial and other vascular surfaces with sufficient detail for iterated-closest-point (ICP)-based registration.
To achieve this, 30-45 minutes of mapping can be required for the registration procedure alone before the interventional
component of the patient study begins. Any acquired EAM point location that is not in contact with
the chamber surface can adversely impact the quality of registration. Significantly faster point acquisition can be
achieved by recording catheter tip locations automatically and continuously without requiring explicit navigation
to and annotation of fiducial landmarks. We present a novel registration framework in which EAM locations
are rapidly acquired and recorded in a continuous, untriggered fashion while the electrophysiologist manipulates
the catheter tip within the heart. Results from simulation indicate that mean registration errors are on the order
of 3-4 mm, comparable in magnitude to conventional registration procedures, which take significantly longer to
perform. Qualitative assessment in clinical data also reflects good agreement with physician expectations.
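For reference, a bare-bones point-to-point ICP of the kind the continuously acquired catheter-tip cloud feeds into is sketched below; the framework described above additionally copes with untriggered, possibly non-contact samples, which this sketch does not.

```python
# Minimal point-to-point ICP between catheter-tip samples and a surface model.
import numpy as np
from scipy.spatial import cKDTree

def icp(eam_points, surface_points, iterations=50):
    """eam_points: (N, 3) catheter-tip samples; surface_points: (M, 3) vertices of the
    pre-procedural surface model. Returns a 4x4 rigid transform (EAM -> image space)."""
    T = np.eye(4)
    src = eam_points.copy()
    tree = cKDTree(surface_points)
    for _ in range(iterations):
        _, idx = tree.query(src)                     # closest surface point per sample
        tgt = surface_points[idx]
        src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - src_c).T @ (tgt - tgt_c))
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = tgt_c - R @ src_c
        src = src @ R.T + t                          # apply the incremental transform
        step = np.eye(4); step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T
```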
A novel approach is presented which combines rotational X-ray imaging, real-time fluoroscopic X-ray imaging and real-time catheter tracking for improved guidance in interventional electrophysiology procedures. Rotational X-ray data and real-time fluoroscopy data obtained from a Philips FD10 flat-detector X-ray system are registered with real-time localization data from catheter tracking equipment. The visualization and registration of rotational X-ray data with catheter location data enable the physician to better appreciate the underlying anatomy of interest in three dimensions and to navigate the interventional or mapping device more effectively. Furthermore, the fused information streams from rotational X-ray, real-time X-ray fluoroscopy and real-time three-dimensional catheter locations offer direct imaging feedback during interventions, facilitating navigation and potentially improving clinical outcome. The technique can reduce the fluoroscopy time required in a procedure, since the catheter is registered to and visualized with off-line projection data from various view angles. We show a demonstrator which integrates, registers, and visualizes the various data streams and can be implemented in the clinical workflow with reasonable effort. Results are presented based on an experimental setup. Furthermore, the robustness and accuracy of this technique have been determined based on phantom studies.
In carotid plaque imaging, MRI provides exquisite soft-tissue characterization, but lacks the temporal resolution for tissue strain imaging that real-time 3D ultrasound (3DUS) can provide. On the other hand, real-time 3DUS currently lacks the spatial resolution of carotid MRI. Non-rigid alignment of ultrasound and MRI data is essential for integrating complementary morphology and biomechanical information for carotid vascular assessment. We assessed non-rigid registration for fusion of 3DUS and MRI carotid data based on deformable models that are warped to maximize voxel similarity. We performed validation in vitro using isolated carotid artery imaging. These samples were subjected to soft-tissue deformations during 3DUS and were imaged in a static configuration with standard MR carotid pulse sequences. Registration of the source ultrasound sequences to the target MR volume was performed, and the mean absolute distance between fiducials within the ultrasound and MR datasets was measured to determine inter-modality alignment quality. Our results indicate that registration errors on the order of 1 mm are possible in vitro despite the low resolution of current-generation 3DUS transducers. Registration performance should be further improved with the use of higher-frequency 3DUS prototypes, and efforts are underway to test those probes for in vivo 3DUS carotid imaging.
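A minimal sketch of a voxel-similarity measure of the kind maximized during such a warp is shown below (normalized mutual information from the joint histogram of co-located intensities); the specific metric used in the study is not stated above, so this is illustrative only.

```python
# Normalised mutual information between co-located, warped US and MR intensities.
import numpy as np

def normalised_mutual_information(us_voxels, mr_voxels, bins=64):
    """us_voxels, mr_voxels: 1-D arrays of co-located intensities after warping."""
    joint, _, _ = np.histogram2d(us_voxels, mr_voxels, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    eps = 1e-12
    hx = -np.sum(px * np.log(px + eps))
    hy = -np.sum(py * np.log(py + eps))
    hxy = -np.sum(pxy * np.log(pxy + eps))
    return (hx + hy) / hxy
```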
Spectral-Domain Optical Coherence Tomography (SDOCT) allows for in-vivo video-rate investigation of biomedical
tissue depth structure with the purpose of non-invasive optical diagnostics. In ophthalmic applications, it has been
suggested that Optical Coherence Tomography (OCT) can be used for diagnosis of glaucoma by measuring the thickness
of the Retinal Nerve Fiber Layer (RNFL). We present here an automated method for determining the RNFL thickness
map from a 3-D dataset. Boundary detection has been studied since the early days of computer vision and image
processing, and different approaches have been proposed. The procedure described here is based on edge detection using
a deformable spline (snake) algorithm. As the snake seeks to minimize its overall energy, its shape will converge on the
image contour, in this case the boundaries of the nerve fiber layer. In general, the snake is not allowed to travel
far from its initial position, and therefore proper initialization is required. The snake parameters (elasticity, rigidity, viscosity, and external force weight)
are set to allow the snake to follow the boundary for a large number of retinal topographies. The RNFL thickness map is
combined with an integrated reflectance map of the retina and retinal cross-sectional images (OCT movie), to provide
the ophthalmologist with a familiar image for interpreting the OCT data. The video-rate capabilities of our SDOCT
system allow for mapping the true retinal topography since the motion artifacts are significantly reduced as compared to
slower time-domain systems.
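A condensed sketch of one common snake formulation is given below: the elasticity and rigidity terms are handled implicitly through a banded matrix, and the external force is the gradient of an edge-strength map. The parameter values, and the assumption of a closed contour, are illustrative rather than those tuned for retinal topographies.

```python
# Sketch of an implicit snake update driven by the gradient of an edge-strength map.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def snake(image, x, y, alpha=0.1, beta=1.0, gamma=1.0, iterations=200):
    """x, y: initial contour coordinates (closed contour assumed for brevity)."""
    n = len(x)
    # banded internal-energy matrix for elasticity (alpha) and rigidity (beta)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2 * alpha + 6 * beta
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = -alpha - 4 * beta
        A[i, (i - 2) % n] = A[i, (i + 2) % n] = beta
    inv = np.linalg.inv(A + gamma * np.eye(n))
    # external force: gradient of the smoothed edge-strength map
    smoothed = gaussian_filter(image.astype(float), 2.0)
    E = sobel(smoothed, axis=0) ** 2 + sobel(smoothed, axis=1) ** 2
    fy, fx = np.gradient(E)
    for _ in range(iterations):
        xi = np.clip(x.round().astype(int), 0, image.shape[1] - 1)
        yi = np.clip(y.round().astype(int), 0, image.shape[0] - 1)
        x = inv @ (gamma * x + fx[yi, xi])
        y = inv @ (gamma * y + fy[yi, xi])
    return x, y
```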
Spectral-Domain Optical Coherence Tomography (SDOCT) allows for in-vivo video-rate investigation of biomedical tissue depth structure intended for non-invasive optical diagnostics. It has been suggested that OCT can be used for diagnosis of glaucoma by measuring the thickness of the Retinal Nerve Fiber Layer (RNFL). We present an automated method for determining the RNFL thickness from a 3-D dataset based on edge detection using a deformable spline algorithm. The RNFL thickness map is combined with an integrated reflectance map and retinal cross-sectional images to provide the ophthalmologist with a familiar image for interpreting the OCT data. The video-rate capabilities of our SDOCT system allow for mapping the true retinal topography since motion artifacts are significantly reduced as compared to slower time-domain systems. Combined with Doppler velocimetry, SDOCT also provides information on retinal blood flow dynamics. We analyzed the pulsatile nature of the bidirectional flow dynamics in an artery-vein pair for a healthy volunteer at different locations and for different blood vessel diameters. The Doppler phase shift is determined as the phase difference at the same point of adjacent depth profiles, and is integrated over the area delimited by two circles corresponding to the blood vessels' locations. Its temporal evolution clearly shows the pulsatile nature of the blood flow, i.e. the cardiac cycle, in both the artery and the vein. The artery is identified as having a stronger variation of the integrated phase shift. We observe that artery pulsation is always easily detectable, while vein pulsation appears to depend on the vein's diameter.
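The phase-shift computation lends itself to a compact sketch: the Doppler phase at each depth is the phase difference between adjacent complex depth profiles, and the vessel flow signal is obtained by integrating it over the region marking the vessel. Array names are illustrative, and the sketch works on a single B-scan rather than the full temporal sequence.

```python
# Doppler phase shift from adjacent complex depth profiles (A-lines), integrated
# over a vessel region of interest.
import numpy as np

def doppler_phase(alines):
    """alines: complex array (depth, n_alines). Returns phase shifts (depth, n_alines-1)."""
    return np.angle(alines[:, 1:] * np.conj(alines[:, :-1]))

def integrated_vessel_phase(phase, vessel_mask):
    """vessel_mask: boolean (depth, n_alines-1) region delimiting the vessel.
    Returns the integrated phase shift within the vessel for each A-line pair."""
    return np.where(vessel_mask, phase, 0.0).sum(axis=0)
```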
Low-power, portable ultrasound imaging devices are well-suited for the diagnostic requirements of healthcare delivery on the modern battlefield. The non-invasiveness and good spatiotemporal resolution of ultrasonography allow for early detection of changes in tissue anatomy and material behavior that signal the presence of injury from exposure to biological hazards or disease processes that can jeopardize the performance of personnel in the field. This potential has not been fully realized, however, due to the presence of image-degrading factors that make ultrasound imagery notoriously difficult to interpret. To detect and quantify tissue pathology from ultrasound, anatomical boundaries and tissue deformation in the images must be estimated accurately; this requires image processing that suppresses noise while retaining salient tissue borders in the imagery. We focus here on cross-sections of the carotid vessel, performing boundary extraction and deformation tracking over time for the purpose of detecting abnormal tissue characteristics. We validate this concept in noisy simulated images derived from finite-element models of normal and abnormal vessel cross-sections, and in real ultrasound images from a human subject.
On the battlefield of the future, it may become feasible for medics to perform, via application of new biomedical technologies, more sophisticated diagnoses and surgery than is currently practiced. Emerging biomedical technology may enable the medic to perform laparoscopic surgical procedures to remove, for example, shrapnel from injured soldiers. Battlefield conditions constrain the types of medical image acquisition and interpretation which can be performed. Ultrasound is the only viable biomedical imaging modality appropriate for deployment on the battlefield -- which leads to image interpretation issues because of the poor quality of ultrasound imagery. To help overcome these issues, we develop and implement a method of image enhancement which could aid non-experts in the rapid interpretation and use of ultrasound imagery. We describe an energy minimization approach to finding boundaries in medical images and show how prior information on edge orientation can be incorporated into this framework to detect tissue boundaries oriented at a known angle.
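One way such an orientation prior can enter the energy functional is sketched below: the external (image) energy is built from a directional derivative taken across the expected boundary angle, so that edges at the known orientation dominate the minimization. The angle and smoothing scale are assumptions, not values from the text.

```python
# External energy favouring edges at a known orientation, for use in an
# energy-minimising boundary detector.
import numpy as np
from scipy.ndimage import gaussian_filter

def oriented_edge_energy(image, angle_deg, sigma=2.0):
    """Return an energy map that is most negative along edges oriented at angle_deg."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    theta = np.deg2rad(angle_deg)
    # derivative across the expected edge direction (i.e. along its normal)
    directional = gx * np.cos(theta + np.pi / 2) + gy * np.sin(theta + np.pi / 2)
    return -np.abs(directional)    # minimised where the oriented edge response is strong
```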