One of the indicators of early lung cancer is a color change in airway mucosa. Bronchoscopy of the major airways can provide high-resolution color video of the airway tree's mucosal surfaces. In addition, 3D MDCT chest images provide 3D structural information of the airways. Unfortunately, the bronchoscopic video contains no explicit 3D structural and position information, and the 3D MDCT data captures no color or textural information of the mucosa. A fusion of the topographical information from the 3D CT data and the color information from the bronchoscopic video, however, enables realistic 3D visualization, navigation, localization, and quantitative color-topographic analysis of the airways. This paper presents a method for topographic airway-mucosal surface mapping from bronchoscopic video onto 3D MDCT endoluminal views. The method uses registered video images and CT-based virtual endoscopic renderings of the airways. The visibility and depth data are also generated by the renderings. Uniform sampling and over-scanning of the visible triangles are done before they are packed into a texture space. The texels are then re-projected onto video images and assigned color values based on depth and illumination data obtained from renderings. The texture map is loaded into the rendering engine to enable real-time navigation through the combined 3D CT surface and bronchoscopic video data. Tests were performed on pre-recorded bronchoscopy patient video and associated 3D MDCT scans. Results show that we can effectively accomplish mapping over a continuous sequence of airway images spanning several generations of airways.
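A minimal sketch (not the authors' implementation) of the texel-coloring step described above: texel centers are re-projected into a registered video frame through an assumed pinhole camera, and the rendered depth buffer serves as the visibility test. The camera matrix K, the pose (R, t), and the tolerance eps are illustrative assumptions.

```python
import numpy as np

def color_texels(texels_ct, K, R, t, depth_map, video_frame, eps=1.0):
    """texels_ct: (N, 3) texel centers in CT coordinates.
    depth_map: per-pixel depth from the virtual-endoscopic rendering.
    video_frame: (H, W, 3) video image registered to that rendering."""
    cam = texels_ct @ R.T + t                 # CT -> camera coordinates
    z = cam[:, 2]
    pix = cam @ K.T                           # pinhole projection
    pix = pix[:, :2] / pix[:, 2:3]
    h, w = depth_map.shape
    u = np.clip(np.round(pix[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(pix[:, 1]).astype(int), 0, h - 1)
    # A texel is visible if its depth agrees with the rendered depth buffer.
    visible = np.abs(depth_map[v, u] - z) < eps
    colors = np.zeros((len(texels_ct), 3), dtype=video_frame.dtype)
    colors[visible] = video_frame[v[visible], u[visible]]
    return colors, visible
```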
The aim of this study is to develop a virtual colonoscopy (VC) workstation that supports both CT (computed tomography) and MR (magnetic resonance) imaging procedures. The workflow should be optimized and able to take advantage of both image modalities. The technological breakthrough is the real-time volume rendering of spatially intensity-inhomogeneous MR images to achieve a high-quality 3D endoluminal view. VC aims at visualizing CT or MR tomography images for the detection of colonic polyps and lesions. It is also called CT or MR colonography, depending on the imaging modality employed. Published results of large-scale clinical trials demonstrated sensitivities above 90% for polyp detection with certain CT colonography (CTC) workstations. A drawback of CTC is the radiation exposure. MR colonography (MRC) is free from X-ray radiation and achieved almost 100% specificity for polyp detection in published trials. The better tissue contrast of MR images also allows accurate diagnosis of inflammatory bowel disease, which is usually difficult in CTC. At present, most VC workstations are designed for CT examinations. They are not able to display multi-sequence MR series concurrently in a single application, and automatic correlation between the 2D and 3D views is not available due to the difficulty of building 3D models from MR images. This study aims at enhancing a commercial VC product that has been used successfully for CTC so that it equally supports the dark-lumen MR protocol.
Endoluminal brachytherapy of peripherally located bronchial carcinoma is difficult because of the complexity of positioning an irradiation catheter, led by a bronchoscope, at the desired spot inside the lung. Furthermore, the size of the bronchoscope only rarely permits the insertion of a catheter into the fine segmental bronchi. We are developing an image-guided navigation system which indicates a path for guidance to the desired bronchus. A thin catheter with an enclosed navigation probe can thereby be led directly to the target bronchus, using either the bronchoscope's video or virtual bronchoscopy. Because the bronchi are thin and embedded in moving soft tissue, the navigation system has to be very precise. This accuracy is reached by a gradually registering navigation component, which improves accuracy in the course of the intervention by mapping the already covered path onto a preoperatively generated graph-based description of the bronchial tree. The system includes components for navigation, segmentation, preoperative planning, and intraoperative guidance. Furthermore, the visualization of the path can be adapted to the lung specialist's habits (bronchoscope video, 2D, 3D, virtual bronchoscopy, etc.).
We are developing an augmented reality (AR) image guidance system in which information derived from medical images is overlaid onto a video view of the patient. The centerpiece of the system is a head-mounted display custom fitted with two miniature color video cameras that capture a stereo view of the scene. Medical graphics are overlaid onto the video view and appear firmly anchored in the scene, without perceivable time lag or jitter. We have been testing the system for different clinical applications. In this paper we discuss minimally invasive thoracoscopic spine surgery as a promising new orthopedic application. In the standard approach, the thoracoscope (a rigid endoscope) provides visual feedback for the minimally invasive procedure of removing a damaged disc and fusing the two neighboring vertebrae. The navigation challenges are twofold: from a global perspective, the correct vertebrae on the spine have to be located with the inserted instruments; from a local perspective, the actual spine procedure has to be performed precisely. Visual feedback from the thoracoscope provides only limited support for both of these tasks. In the augmented reality approach, we give the surgeon additional anatomical context for navigation. Before the surgery, we derive a model of the patient's anatomy from a CT scan, and during surgery we track the location of the surgical instruments in relation to the patient and the model. With this information, we can help the surgeon with both global and local navigation, providing a global map and 3D information beyond the local 2D view of the thoracoscope. Augmented reality visualization is a particularly intuitive method of displaying this information to the surgeon. To adapt our augmented reality system to this application, we had to add an external optical tracking system, which now works in combination with our head-mounted tracking camera. The surgeon's feedback from the initial phantom experiments is very positive.
The course and success of an endovascular intervention can be influenced by the choice of the guidewire, and primarily by its ability to access the lesion. The simulation of catheterization in complex vasculature is therefore of great interest as an aid to surgical planning. The overall objective of the simulation is to improve the choice of the guidewire (by simulating its intrinsic features: torque, shape, rigidity, elasticity) as well as its navigation within the patient-specific vasculature. We propose a new approach for the simulation of guidewire navigation. It is based on: (i) modeling the guidewire using a "multi-body" approach and representing its internal characteristics, (ii) modeling the artery as a surface mesh, and (iii) simulating the interactions of the guidewire with its environment (artery and clinician). In this study, the strength and elasticity of the guidewire are modeled, and only the "push" action performed by the clinician is considered. The global behavior of the guidewire is simulated by means of retraction and relaxation processes. To handle interaction with the artery walls, methods based on graphics hardware have been developed (i) to detect collisions between the guidewire and the artery walls and (ii) to find the direction of the retraction process, which defines the local reaction of the guidewire. All these methods have been tested in a qualitative validation on a patient vasculature.
The ribs within computed tomography (CT) images form curved structures intersecting the axial plane at oblique angles. Rib metastases and other pathologies of the rib are apparent in CT images. Analysis of the ribs using conventional 2D axial slice viewing involves manually tracking them through multiple slices. 3D visualization of the ribs also has drawbacks due to occlusion: examination of a single rib may require repositioning the viewpoint several times in order to avoid other ribs. We propose a novel visualization method that eliminates rib curvature by straightening each rib along its centerline, reducing both 2D and 3D viewing complexity. Our method first segments the ribs and extracts their centerlines through a tracing-based segmentation. Next, the centerlines are refined into smoother contours. Each centerline is then used to resample and digitally straighten the corresponding rib. The result is a simplified volume containing only the straightened ribs, which can be quickly examined both in 3D and by scrolling through a series of about 40 slices. Additionally, a projection of the image can yield a single 2D image for examination. The method was tested on chest CT images obtained from patients both positive and negative for rib metastases. Running time was less than 15 seconds per dataset. Preliminary results demonstrate the effectiveness of the visualization in detecting and delineating these metastases.
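The straightening step could look roughly like the sketch below, which resamples the CT volume on planes perpendicular to successive centerline points and stacks the planes into a straightened sub-volume. Centerline extraction and smoothing are assumed done; the local frame construction and patch size are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def straighten_rib(volume, centerline, half_size=15):
    """centerline: (N, 3) smoothed points in voxel coordinates."""
    slices = []
    for i in range(len(centerline) - 1):
        p = centerline[i]
        tang = centerline[i + 1] - centerline[i]
        tang = tang / np.linalg.norm(tang)
        ref = np.array([1.0, 0.0, 0.0])       # any vector not parallel to tang
        if abs(np.dot(ref, tang)) > 0.9:
            ref = np.array([0.0, 1.0, 0.0])
        u = np.cross(tang, ref); u /= np.linalg.norm(u)
        v = np.cross(tang, u)                 # u, v span the normal plane
        ii, jj = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
        coords = p[:, None, None] + u[:, None, None] * ii + v[:, None, None] * jj
        slices.append(map_coordinates(volume, coords, order=1))
    return np.stack(slices)   # one cross-section per centerline point
```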
Modern multi-slice CT (MSCT) scanners allow the acquisition of 3D data sets covering the complete heart at different phases of the cardiac cycle. This enables the physician to non-invasively study the dynamic behavior of the heart, such as wall motion abnormalities. To this end, an interactive 4D visualization of the heart in motion is desirable. However, the application of well-known volume rendering algorithms enforces considerable sacrifices in image quality to ensure interactive frame rates, even when accelerated by standard graphics processors (GPUs). The performance of pure CPU implementations of direct volume rendering is limited even for moderate volume sizes by both the number of required computations and the available memory bandwidth. Despite offering higher computational performance and more memory bandwidth, GPU-accelerated implementations cannot provide interactive visualizations of large 4D data sets either, since data sets that do not fit into the onboard graphics memory are often not handled efficiently. In this paper we present a software architecture for GPU-based direct volume rendering that allows the interactive high-quality visualization of large medical time-series data sets. In contrast to other work, our architecture exploits the complete memory hierarchy for high cache and bandwidth efficiency. Additionally, several data-dependent techniques are incorporated to reduce the amount of volume data to be transferred and rendered. None of these techniques sacrifices image quality in order to improve speed. By applying the method to several multi-phase MSCT cardiac data sets, we show that we can achieve interactive frame rates on currently available standard PC hardware.
Despite the increasing interest in three-dimensional (3D) visualization, rendering algorithms still suffer from high numerical complexity and large memory requirements. With the continuously increasing volume of medical imaging data, fast visualization algorithms become crucial. Powerful mathematical techniques based on the wavelet transform promise to provide efficient multi-resolution visualization algorithms, hence optimizing 3D rendering. Maximum Intensity Projection (MIP) is a 3D rendering algorithm that is used to visualize high-intensity structures within volumetric data: at each pixel, the highest data value encountered along the corresponding viewing ray is depicted. In this paper, we propose a fast MIP 3D rendering based on a new hierarchical data representation. The proposed approach uses a new morphological wavelet decomposition that allows for fast initial rendering and progressive subsequent refinements. Our method includes a pre-processing step based on a non-linear wavelet representation that achieves efficient data compression and storage, resulting in a very fast visualization algorithm. The rendering speed-up results from removing cells that do not contribute to any MIP projection and from an innovative storage scheme for the volume cells. The proposed algorithm gives very promising results: very good MIP projections can be obtained with less than 20% of the volumetric data. This makes our algorithm very competitive with the best MIP methods proposed so far in the literature.
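For orientation, a crude stand-in for the hierarchical cell removal is sketched below: an axis-aligned MIP that skips whole slabs whose maximum cannot change the projection. The morphological wavelet decomposition itself is not reproduced; the slab size is an illustrative parameter.

```python
import numpy as np

def mip_blocked(volume, axis=0, block=8):
    """MIP along `axis`, skipping slabs that cannot alter the projection."""
    vol = np.moveaxis(volume, axis, 0).astype(float)
    proj = np.full(vol.shape[1:], -np.inf)
    for z0 in range(0, vol.shape[0], block):
        slab = vol[z0:z0 + block]
        if slab.max() <= proj.min():
            continue                      # slab cannot contribute anywhere
        np.maximum(proj, slab.max(axis=0), out=proj)
    return proj
```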
Computer-based 3D atlases allow an interactive exploration of the human body. However, in most cases such 3D atlases are derived from one single individual and therefore do not account for the variability of anatomical structures in shape and size. Since the geometric variability across humans plays an important role in many medical applications, our goal is to develop a framework for an anatomical atlas that represents and visualizes the variability of selected anatomical structures. The basis of the project presented here is the VOXEL-MAN atlas of inner organs, which was created from the Visible Human data set. For modeling anatomical shapes and their variability we utilize "m-reps", which allow a compact representation of anatomical objects on the basis of their skeletons. As an example we used a statistical model of the kidney based on 48 different variants. With the integration of a shape description into the VOXEL-MAN atlas it is now possible to query and visualize different shape variations of an organ, e.g. by specifying a person's age or gender. In addition to the representation of individual shape variants, the average shape of a population can be displayed. Besides a surface representation, a volume-based representation of the kidney's shape variants is also possible. It results from deforming the reference kidney of the volume-based model using the m-rep shape description. In this way a realistic visualization of the shape variants becomes possible, as well as the visualization of the organ's internal structures.
A framework for real-time visualization of tumor-influenced lung dynamics is presented in this paper. The framework potentially allows clinical technicians to visualize in 3D the morphological changes of the lungs under different breathing conditions; consequently, it may provide a sensitive and accurate assessment tool for pre-operative and intra-operative clinical guidance. The proposed simulation method extends work previously developed for modeling and visualizing normal 3D lung dynamics. The model accounts for the changes in regional lung functionality and in the global motor response due to the presence of a tumor. For real-time deformation purposes, we use a Green's function (GF), a physically based approach that allows real-time multi-resolution modeling of the lung deformations. This approach also allows an analytical estimation of the GF's deformation parameters from 4D lung datasets at different levels of detail of the lung model. Once estimated, the subject-specific GF facilitates the simulation of tumor-influenced lung deformations under any breathing condition modeled by a parametric pressure-volume (PV) relation.
Reconstruction of surfaces from contours in medical images is often used for modeling. Contouring is usually performed using a single cross-sectional orientation. A potentially more efficient and accurate approach is to use two or more sets of orthogonal contours, in which case a computational algorithm is needed for reconstructing the surface from them. The orthogonal contours are transformed into a mesh of 3D polygons, and the contours are resampled using spline interpolation. Finally, all possible triangulations of each polygon are evaluated, and an optimal one is selected to reconstruct the patch according to a minimum-area criterion with a constraint on the dihedral angle. All reconstructed patches are combined to produce a surface reconstructed from the orthogonal contours. This method was found to produce surfaces with a smooth and highly realistic appearance.
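One possible form of the contour-resampling step is sketched below: a periodic interpolating cubic spline is fitted to each closed contour with SciPy and sampled uniformly. The sample count n is an illustrative choice, and the exhaustive minimum-area triangulation of the resulting polygons is not shown.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def resample_contour(points, n=64):
    """points: (M, 3) ordered vertices of one closed contour."""
    pts = np.vstack([points, points[:1]])          # close the contour
    tck, _ = splprep(pts.T, s=0, per=True)         # periodic cubic spline
    u = np.linspace(0.0, 1.0, n, endpoint=False)
    return np.stack(splev(u, tck), axis=1)         # (n, 3) resampled contour
```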
A successful surface based image-to-physical space registration in image-guided liver surgery (IGLS) is critical to provide reliable guidance information and pertinent surface displacement data for use in deformation correction algorithms. The current protocol used to perform the image-to-physical space registration involves an initial pose estimation provided by a point based registration of anatomical landmarks identifiable in both the preoperative tomograms and the intraoperative presentation. The surface based registration is then performed via a traditional iterative closest point algorithm between the preoperative liver surface, segmented from the tomographic image set, and an intra-operatively acquired point cloud of the liver surface provided by a laser range scanner. Using the aforementioned method, the registration accuracy in IGLS can be compromised by poor initial pose estimation as well as tissue deformation due to the liver mobilization and packing procedure performed prior to tumor resection. In order to increase the robustness of the current surface-based registration method used in IGLS, we propose the incorporation of salient anatomical features, identifiable in both the preoperative image sets and intra-operative liver surface data, to aid in the initial pose estimation and play a more significant role in the surface based registration via a novel weighting scheme. The proposed surface registration method will be compared with the traditional technique using both phantom and clinically acquired data. Additionally, robustness studies will be performed to demonstrate the ability of the proposed method to converge to reasonable solutions even under conditions of large deformation and poor initial alignment.
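The core of one iteration of a feature-weighted surface registration is a weighted rigid fit; the standard weighted Procrustes (Kabsch) solution is sketched below under that assumption. The paper's novel weighting scheme is not specified here, so the weights w are left abstract.

```python
import numpy as np

def weighted_rigid_fit(src, dst, w):
    """Find R, t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In a weighted ICP loop, points matched to salient anatomical features would receive larger w_i, pulling the alignment toward those correspondences.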
Lung cancer screening for early diagnosis is a clinically important problem. One screening method is to test tissue samples obtained from CT-fluoroscopy (CTF) guided lung biopsy. CTF provides real-time imaging; however on most machines the view is limited to a single slice. Mentally reconstructing the direction of the needle when it is not in the imaging plane is a difficult task. We are currently developing 3D visualization software that will augment the physician's ability to perform this task. At the beginning of the procedure a CT scan is acquired at breath-hold. The physician then specifies an entry point and a target point on the CT. As the procedure advances the physician acquires a CTF image at breath-hold; the system then registers the current setup to the CT scan. To assess the performance of different registration algorithms for CTF/CT registration we propose to use simulated CTF images. These images are created by deforming the original CT volume and extracting a slice from it. Realistic deformation of the CT volume is achieved by using positional information from electromagnetically tracked fiducials, acquired throughout the respiratory cycle. To estimate the dense displacement field underlying the sparse displacement field provided by the fiducials we use radial basis function interpolation. Finally, we evaluated Thirion's "demons" algorithm, as implemented in ITK, for the task of slice-to-volume registration. We found it to be unsuitable for this task, as in most cases the recovered displacements were less than 50% of the original ones.
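The densification of the sparse fiducial displacements might look like the following sketch. The abstract names radial basis function interpolation but not the basis, so the Gaussian kernel, its width sigma, and the regularization lam are assumptions; each displacement component is interpolated independently.

```python
import numpy as np

def rbf_displacement_field(fid_pos, fid_disp, query_pos, sigma=30.0, lam=1e-6):
    """fid_pos: (N, 3) fiducial positions; fid_disp: (N, 3) displacements;
    query_pos: (M, 3) positions at which the dense field is evaluated."""
    def gauss(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    K = gauss(fid_pos, fid_pos) + lam * np.eye(len(fid_pos))
    weights = np.linalg.solve(K, fid_disp)        # (N, 3) RBF coefficients
    return gauss(query_pos, fid_pos) @ weights    # (M, 3) dense field
```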
Multi-slice computed tomography (MSCT) has developed rapidly in the emerging field of cardiovascular imaging. The manual analysis of atherosclerotic plaques in coronary arteries is a very time-consuming and labor-intensive process, and today only qualitative analysis is possible. In this paper we present a new shape-based segmentation and visualization technique for quantitative analysis of atherosclerotic plaques in coronary artery disease. The technique takes into account several aspects of the vascular anatomy. It uses two surface representations, one for the contrast-filled vessel lumen and one for the vascular wall; the deviation between these two surfaces is defined as the plaque volume. The surface representations can be edited manually by the user. With this kind of representation it is possible to calculate sub-plaque volumes (such as the lipid-rich core, fibrous tissue, and calcified tissue) inside the suspicious area. A high-quality 3D visualization using Open Inventor is also possible.
We report an image segmentation and registration method for studying joint morphology and kinematics from in vivo MRI scans and its application to the analysis of ankle joint motion. Using an MR-compatible loading device, a foot was scanned in a single neutral and seven dynamic positions including maximal flexion, rotation and inversion/eversion. A segmentation method combining graph cuts and level sets was developed which allows a user to interactively delineate 14 bones in the neutral position volume in less than 30 minutes total, including less than 10 minutes of user interaction. In the subsequent registration step, a separate rigid body transformation for each bone is obtained by registering the neutral position dataset to each of the dynamic ones, which produces an accurate description of the motion between them. We have processed six datasets, including 3 normal and 3 pathological feet. For validation our results were compared with those obtained from 3DViewnix, a semi-automatic segmentation program, and achieved good agreement in volume overlap ratios (mean: 91.57%, standard deviation: 3.58%) for all bones. Our tool requires only 1/50 and 1/150 of the user interaction time required by 3DViewnix and NIH Image Plus, respectively, an improvement that has the potential to make joint motion analysis from MRI practical in research and clinical applications.
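The abstract does not define its volume overlap ratio; one common choice, the Dice coefficient, is shown below under that assumption for two binary segmentations of the same bone.

```python
import numpy as np

def dice_overlap(seg_a, seg_b):
    """seg_a, seg_b: boolean 3D label arrays for the same bone."""
    inter = np.logical_and(seg_a, seg_b).sum()
    return 2.0 * inter / (seg_a.sum() + seg_b.sum())
```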
Electromagnetic tracking systems are affected by the presence of metal or, more generally, conductive objects. In this paper, the results of two protocols are presented which assess the amount of distortion caused by certain types of metal. One of the main application areas of electromagnetic tracking systems is the medical field. Therefore, this paper concentrates on metals that are common in a medical environment, such as typical tool and implant materials and OR-table steel. Results are obtained and compared for the first generation of Aurora systems (Aurora 1), released in September 2003, and for the new Aurora system (Aurora 2), released in September 2005.
Optical tracking systems have been used for several years in image-guided medical procedures. Vendors often state the static accuracy of a single retro-reflective sphere or LED, and expensive coordinate measurement machines (CMMs) are used to validate the positional accuracy over the specified working volume. Users, however, are interested in the dynamic accuracy of their tools: the configuration of individual sensors into a unique tool, the calibration of the tool tip, and the motion of the tool contribute additional errors. Electromagnetic (EM) tracking systems are considered an enabling technology for many image-guided procedures because they are not limited by line-of-sight restrictions, take up minimal space in the operating room, and use sensors that can be very small. It is often difficult to quantify the accuracy of EM trackers because they can be affected by field distortion from certain metal objects, and many high-accuracy measurement devices themselves affect the EM measurements being validated. EM tracker accuracy also tends to vary over the working volume and with the orientation of the sensors. We present several simple methods for estimating the dynamic accuracy of EM-tracked tools. We discuss the characteristics of the EM tracker used in the GE Healthcare family of surgical navigation systems. Results for other tracking systems are included.
Tracking organ motion due to respiration is important to enable precise interventions in the regions of the abdomen and thorax. Respiratory induced motion in these regions may limit the accuracy of interventions which do not employ some type of tracking. One method of tracking organ motion is to use a predictive model based on external tracking that is correlated to internal motion. This approach depends on the accuracy of the model used for correlating the two motions; ideally, one would track the internal motion directly. We are investigating the use of electromagnetically tracked fiducials to enable real-time tracking of internal organ motion. To investigate the in-vivo accuracy of this approach we propose to use stereo-fluoroscopy. In this paper we show that stereo-fluoroscopy is accurate enough to serve as a validation method, exhibiting sub-millimetric accuracy (maximal error of 0.66 mm). We study the effect of the bi-plane fluoroscopes on the electromagnetic system's accuracy, and show that placing the bi-plane fluoroscopes in a typical intra-operative setup has a negligible effect on the tracking accuracy (maximal error of 1.4 mm). Finally, we compare the results of stereo-fluoroscopy tracking and electromagnetic tracking of needles in an animal study, showing a mean (std) difference of 1.4 (1.5) mm between modalities. These results show that stereo-fluoroscopy can be used in conjunction with electromagnetic tracking with minimal effect, and that the electromagnetic system is accurate enough for motion tracking of internal organs.
Precise knowledge of the individual cardiac anatomy is essential for diagnosis and treatment of congenital heart disease. Complex malformations of the heart can best be comprehended not from images but from anatomic specimens. Physical models can be created from data using rapid prototyping techniques, e.g., laser sintering or 3D printing. We have developed a system for obtaining data that show the relevant cardiac anatomy from high-resolution CT/MR images and are suitable for rapid prototyping. The challenge is to preserve all relevant details unaltered in the produced models. The main anatomical structures of interest are the four heart cavities (atria, ventricles), the valves and the septum separating the cavities, and the great vessels. These can be shown either by reproducing the morphology itself or by producing a model of the blood pool, thus creating a negative of the morphology. Algorithmically, the key issue is segmentation. In practice, tools allowing the cardiologist or cardiac surgeon to interactively check and correct the segmentation are even more important, due to the complex, irregular anatomy and imaging artefacts. The paper presents the algorithmic and interactive processing steps implemented in the system, which is based on the open-source Medical Imaging Interaction Toolkit (MITK, www.mitk.org). It is shown how the principles used in MITK make it possible to assemble the system from modules (functionalities) developed independently of each other. The system allows models of the heart (and other anatomic structures) of individual patients to be produced, as well as the reproduction of unique specimens from pathology collections for teaching purposes.
One of the greatest challenges for a software engineer is to create a complex application that is comprehensive enough to be useful to a diverse set of users, yet focused enough for individual tasks to be carried out efficiently with minimal training. This "powerful yet simple" paradox is particularly prevalent in advanced medical imaging applications. Recent research in the Biomedical Imaging Resource (BIR) at Mayo Clinic has been directed toward development of an imaging application framework that provides powerful image visualization/analysis tools in an intuitive, easy-to-use interface. It is based on two concepts very familiar to physicians - Cases and Workflows. Each case is associated with a unique patient and a specific set of routine clinical tasks, or a workflow. Each workflow is comprised of an ordered set of general-purpose modules which can be re-used for each unique workflow. Clinicians help describe and design the workflows, and then are provided with an intuitive interface to both patient data and analysis tools. Since most of the individual steps are common to many different workflows, the use of general-purpose modules reduces development time and results in applications that are consistent, stable, and robust. While the development of individual modules may reflect years of research by imaging scientists, new customized workflows based on the new modules can be developed extremely fast. If a powerful, comprehensive application is difficult to learn and complicated to use, it will be unacceptable to most clinicians. Clinical image analysis tools must be intuitive and effective or they simply will not be used.
4D images (3 spatial dimensions plus time) using CT or MRI will play a key role in radiation medicine as techniques for respiratory motion compensation become more widely available. Advance knowledge of the motion of a tumor and its surrounding anatomy will allow the creation of highly conformal dose distributions in organs such as the lung, liver, and pancreas. However, many of the current investigations into 4D imaging rely on synchronizing the image acquisition with an external respiratory signal such as skin motion, tidal flow, or lung volume, which typically requires specialized hardware and modifications to the scanner. We propose a novel method for 4D image acquisition that does not require any specific gating equipment and is based solely on open source image registration algorithms. Specifically, we use the Insight Toolkit (ITK) to compute the normalized mutual information (NMI) between images taken at different times and use that value as an index of respiratory phase. This method has the advantages of (1) being able to be implemented without any hardware modification to the scanner, and (2) basing the respiratory phase on changes in internal anatomy rather than external signal. We have demonstrated the capabilities of this method with CT fluoroscopy data acquired from a swine model.
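A minimal NMI computation from a joint histogram is sketched below, showing the quantity used here as a respiratory-phase index. The paper relies on ITK's implementation, so this NumPy version and its bin count are illustrative only.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    def entropy(q):
        q = q[q > 0]
        return -(q * np.log(q)).sum()
    # Studholme's normalization: (H(A) + H(B)) / H(A, B)
    return (entropy(px) + entropy(py)) / entropy(p.ravel())
```

Frames acquired at the same respiratory phase as the reference image then score a higher NMI than frames acquired at a different phase.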
A new procedure for percutaneous screw insertion in the scaphoid is proposed. The procedure involves pre-surgery planning using computed tomography imaging and intra-operative guidance using three-dimensional ultrasound. Preoperatively, the desired screw location and orientation are chosen on a three-dimensional surface model generated from computed tomography images. During the surgery, ultrasound images are captured from the targeted anatomy of the patient using an ultrasound probe tracked with a Certus optical camera. The tracked probe enables the registration of the surface model and the surgical plan to the patient in the operating room. The surgical drill, used by the surgeon for screw insertion, is also tracked with the optical camera. A graphical user interface has been developed to display the surface model, the surgical plan, and the drill in real time. By means of this interface, the surgeon is guided during the screw insertion procedure. Our experiments on scaphoid phantoms demonstrate that the accuracy of the proposed procedure is potentially of the same order as that of open reduction and screw fixation surgery. The advantages of this new procedure are a reduced risk of infection and minimal soft-tissue damage due to its percutaneous nature. The procedure also reduces the exposure to ionizing radiation for patients and operating room staff, since ultrasound imaging is employed instead of fluoroscopy.
Ultrasound (US) guided prostate brachytherapy is a minimally invasive form of cancer treatment during which a needle is used to insert radioactive seeds into the prostate at pre-planned positions. Interaction with the needle can cause the prostate to deform, and this can lead to inaccuracy in seed placement. Virtual reality (VR) simulation could provide a way for surgical residents to practice compensating for these deformations. To facilitate such a tool, we have developed a hybrid deformable model that combines ChainMail distance constraints with mass-spring physics to provide realistic, yet customizable deformations. Displacements generated by the model were used to warp a baseline US image to simulate an acquired US sequence. The algorithm was evaluated using a gelatin phantom with a Young's modulus approximately equal to that of the prostate (60 kPa). A 2D US movie was acquired while the phantom underwent needle insertion, and inter-frame displacements were calculated using normalized cross correlation. The hybrid model was used to simulate the same needle insertion, and the two sets of displacements were compared on a frame-by-frame basis. The average per-pixel displacement error was 0.210 mm. A simulation rate of 100 frames per second was achieved using a 1000-element triangular mesh while warping a 300x400 pixel US image on an AMD Athlon 1.1 GHz computer with 1 GB of RAM and an ATI Radeon 9800 Pro graphics card. These results show that this new deformable model can provide an accurate solution to the problem of simulating real-time prostate brachytherapy.
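The inter-frame displacement estimation by normalized cross correlation (NCC) could be implemented as in this sketch; the patch size and search radius are illustrative parameters, not the paper's settings.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equally shaped patches."""
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_patch(prev, curr, top_left, size=32, radius=8):
    """Displacement (dy, dx) maximizing NCC with the patch at `top_left`."""
    y0, x0 = top_left
    templ = prev[y0:y0 + size, x0:x0 + size]
    best, best_d = -2.0, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ys, xs = y0 + dy, x0 + dx
            if ys < 0 or xs < 0 or ys + size > curr.shape[0] \
                    or xs + size > curr.shape[1]:
                continue                 # candidate window out of bounds
            score = ncc(templ, curr[ys:ys + size, xs:xs + size])
            if score > best:
                best, best_d = score, (dy, dx)
    return best_d
```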
This paper presents a feasibility and evaluation study of using 2D ultrasound in conjunction with our statistical deformable bone model in the scope of computer-assisted surgery (CAS). The final aim is to provide the surgeon with an enhanced 3D visualization for surgical navigation in orthopaedic surgery without the need for preoperative CT or MRI scans. We unified our earlier work, combining several automatic methods for statistical bone shape prediction from a sparse set of surface points with ultrasound segmentation and calibration, to provide the intended rapid and accurate visualization. We compared the use of a tracked digitizing pointer with that of ultrasound for acquiring landmarks and bone surface points for the shape estimation of two cast proximal femurs, with two users performing the experiments 5-6 times per scenario. The concept of CT-based error introduced in the paper is used to give an approximate quantitative value for the best hoped-for prediction error, or lower-bound error, for a given anatomy. The conclusions of this work were that the pointer-based approach produced good results and that, although the ultrasound-based approach performed considerably worse on average, there were several cases where its results were comparable to those of the pointer-based approach. The primary factor in the poor ultrasound performance was determined to be the inaccurate localization of the three initial landmarks, which are used for the statistical shape model.
To completely remove a tumor from a diseased kidney, while minimizing the resection of healthy tissue, the surgeon must be able to accurately determine its location, size and shape. Currently, the surgeon mentally estimates these parameters by examining pre-operative Computed Tomography (CT) images of the patient's anatomy. However, these images do not reflect the state of the abdomen or organ during surgery. Furthermore, these images can be difficult to place in proper clinical context. We propose using Ultrasound (US) to acquire images of the tumor and the surrounding tissues in real-time, then segmenting these US images to present the tumor as a three dimensional (3D) surface. Given the common use of laparoscopic procedures that inhibit the range of motion of the operator, we propose segmenting arbitrarily placed and oriented US slices individually using a tracked US probe. Given the known location and orientation of the US probe, we can assign 3D coordinates to the segmented slices and use them as input to a 3D surface reconstruction algorithm. We have implemented two approaches for 3D segmentation from freehand 2D ultrasound. Each approach was evaluated on a tissue-mimicking phantom of a kidney tumor. The performance of our approach was determined by measuring RMS surface error between the segmentation and the known gold standard and was found to be below 0.8 mm.
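Assigning 3D coordinates to segmented slice pixels amounts to chaining the probe calibration and tracking transforms; a sketch follows, in which the matrix names and pixel spacing are assumptions for illustration.

```python
import numpy as np

def slice_points_to_3d(pixels, spacing, T_image_to_probe, T_probe_to_world):
    """pixels: (N, 2) segmented (col, row) coordinates in the US image;
    spacing: (sx, sy) pixel size in mm; T_*: 4x4 homogeneous transforms."""
    n = len(pixels)
    pts = np.zeros((n, 4))
    pts[:, 0] = pixels[:, 0] * spacing[0]    # lateral position, mm
    pts[:, 1] = pixels[:, 1] * spacing[1]    # axial position, mm
    pts[:, 3] = 1.0                          # homogeneous coordinate
    world = pts @ (T_probe_to_world @ T_image_to_probe).T
    return world[:, :3]                      # input to surface reconstruction
```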
Uterine adenoma and uterine bleeding are the two most prevalent diseases in Chinese women, and many women lose their fertility to them. Currently, a minimally invasive ablation system using an RF button electrode is being used in Chinese hospitals to destroy tumor cells or stop bleeding. In this paper, we report on a 3D US guidance system developed to prevent accidents or death of the patient caused by inaccurate localization of the tumor during treatment. A 3D US imaging system using a rotational scanning approach with an abdominal probe was built. In order to reduce the distortion produced when the rotational axis is not collinear with the central beam of the probe, a new 3D reconstruction algorithm is used. A fast 3D needle segmentation algorithm is then used to find the electrode. Finally, the tip of the electrode is located along the segmented 3D needle, and the whole electrode is displayed. Experiments with a water phantom demonstrated the feasibility of our approach.
Liquid crystal displays (LCDs) are fast gaining ground over cathode ray tube (CRT) displays in the medical display market. High-performing LCDs are considered to have comparable or better performance than CRTs in displaying static images. However, LCDs are inferior to CRTs in displaying moving scenes due to their slow response. The response time provided by display manufacturers is typically measured while switching the LCD from black to white and from white to black. This is usually not the longest response time; in reality, the transition time between different gray levels can be many times longer. In this paper we report preliminary work on measuring the gray-level response time of LCDs and simulating the luminance errors caused by slow transitions between gray levels. We first characterized the measuring system using a fast light-emitting diode (LED) to explore its accuracy and noise-filtering capability. A 256x256 matrix of response times between different gray levels was then measured. Nearly half of the gray-level transitions are much longer than the frame time (16.67 ms) of LCD displays, and the longest response time was above 100 ms. When the display is driven between these gray levels, the targeted gray level is not reached for many frame times. To understand how the slow response may affect the display's ability to render the desired image values, we calculated the achieved luminance based on the measured matrix. The results simulate the visual effect of displaying a moving object on LCD monitors and provide a reference for determining LCD performance.
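To make the luminance-error simulation concrete, the sketch below drives a first-order (exponential-settling) panel model with the measured response-time matrix. Both the first-order assumption and the conversion of a 10-90% response time to a time constant are illustrative, not the paper's exact calculation.

```python
import numpy as np

FRAME_MS = 16.67    # one frame at 60 Hz, as cited above

def achieved_levels(driven, response_ms):
    """driven: per-frame target gray levels (ints in 0..255);
    response_ms: 256x256 matrix of measured transition times,
    indexed [start level, target level]."""
    level = float(driven[0])
    out = [level]
    for target in driven[1:]:
        t = response_ms[int(round(level)), int(target)]
        tau = max(t / 2.2, 1e-3)   # 10-90% rise spans ~2.2 time constants
        level += (target - level) * (1.0 - np.exp(-FRAME_MS / tau))
        out.append(level)
    return np.array(out)           # gray level actually reached each frame
```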
One of the key metrics that carry information about the image quality of medical displays is resolution. Until now, this property has been quantitatively assessed in laboratory settings. For the first time, a device consisting of a CCD camera and analysis software has been made commercially available for measuring the resolution of medical displays in a clinical setting. This study aimed to evaluate this new product in terms of accuracy and precision; in particular, attention was paid to determining whether the device is appropriate for clinical use. This work involved the measurement of the Modulation Transfer Function (MTF) of a medical Liquid Crystal Display (LCD) using the software/camera system. To check accuracy, the results were compared with published values of the resolution for the same display. To assess the system's precision, measurements were made multiple times at the same setting. The performance of the system was also ascertained as a function of the focus setting of the camera. In terms of repeatability, the results indicate that when the camera is focused within ±0.64 mm of the optimum focus setting, the MTF values lie within approximately 14% of the best-focus MTF at the Nyquist frequency and 11% of the optimum total sharpness (∫MTF df). Similar results were obtained in the horizontal and vertical directions, and the MTF results track with luminance values as expected. In terms of accuracy, the device provides MTF figures within 10% to 20% of the previously measured values.
Cathode ray tube (CRT) displays and active-matrix liquid crystal displays (AM-LCDs) are currently the dominant softcopy displays in radiology reading rooms. Some studies have shown the superiority of LCDs over CRTs in many respects. In terms of contrast resolution, however, they are similar: both can usually display 8-bit images. Last year at this meeting we reported an error-diffusion-based method to compensate for this limited contrast resolution. In this paper, we propose to include the image content in the error diffusion kernel to further increase the contrast of the displayed image. First, the raw image is processed to extract its contrast information. Areas of the image with low contrast are enhanced more than areas with high contrast, and areas where image noise is dominant are not enhanced. The enhancement operation modifies the image, and the changes are treated as error and fed into the error diffusion kernel. We use a different set of diffusion weights for the contrast enhancement, but the two diffusion operations are performed together.
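For reference, the baseline error-diffusion quantization that the proposed method builds on is sketched below using the classic Floyd-Steinberg weights; the paper's content-dependent enhancement kernel is not reproduced here.

```python
import numpy as np

def error_diffuse(img, levels=256):
    """img: 2D float array in [0, 1]; returns the quantized image."""
    work = img.astype(float).copy()
    out = np.zeros_like(work)
    h, w = work.shape
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            new = np.round(old * (levels - 1)) / (levels - 1)
            out[y, x] = new
            err = old - new          # diffuse to unvisited neighbors
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                work[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                work[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                work[y + 1, x + 1] += err * 1 / 16
    return out
```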
The stabilisation of motion of the beating heart is investigated in the context of minimally invasive robotic surgery. Although reduced by mechanical stabilisers, residual tissue motion still makes safe surgery difficult and time consuming; compensation for this movement is therefore highly desirable. Motion can be captured by tracking natural landmarks on the heart surface recorded by a video endoscope. Stabilisation is achieved by transforming the images using a motion field calculated from the captured local motion. Since the surface of the beating heart deforms nonlinearly, compensating the occurring motion with a constant image correction factor is not sufficient. Therefore, heart motion is captured at several landmarks, and the motion between them is interpolated so that locally appropriate motion correction values are obtained. To estimate the motion between the landmark positions, a triangulation is built and the motion information in each triangle is approximated by linear interpolation. Motion compensation is evaluated by calculating the optical flow remaining in the stabilised images. The proposed linear interpolation model reduces motion significantly and can be implemented efficiently enough to stabilise images of the beating heart in real time.
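The per-triangle linear interpolation amounts to a barycentric blend of the three landmark motion vectors, as the sketch below shows; landmark tracking and the triangulation are assumed given.

```python
import numpy as np

def interpolate_motion(p, tri_pts, tri_motion):
    """p: (2,) pixel position; tri_pts: (3, 2) landmark positions of the
    enclosing triangle; tri_motion: (3, 2) landmark motion vectors."""
    a, b, c = tri_pts
    m = np.column_stack([b - a, c - a])     # 2x2 barycentric system
    u, v = np.linalg.solve(m, p - a)
    w = np.array([1.0 - u - v, u, v])       # barycentric weights
    return w @ tri_motion                   # interpolated motion at p
```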
Accurate material models and efficient computational methods are two fundamental components in building a real-time, realistic surgery simulator. In this paper, we use a least-squares method to calibrate an exponential model of pig liver under the assumption of incompressible material in a uniaxial testing mode. With the obtained parameters, the stress-strain curves generated by the least-squares approach are compared to those from the corresponding model built in ABAQUS and to experimental data, resulting in mean deviations of 1.9% and 4.8%, respectively. Furthermore, we demonstrate the equivalence between the parameters of the exponential material model and those of linear or other nonlinear models under small strains. Finally, we incorporate the calibrated exponential model into a nonlinear finite element framework to simulate the behavior of the liver during an interventional procedure, achieving real-time performance through the use of an interpolation approach.
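The calibration step could be set up as in this sketch, which fits a two-parameter exponential stress-strain law by least squares; the particular form sigma = a(exp(b·strain) - 1) is an illustrative assumption, not necessarily the paper's exact constitutive expression.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(strain, a, b):
    # Exponential stress-strain law: sigma = a * (exp(b * strain) - 1)
    return a * np.expm1(b * strain)

def calibrate(strain, stress):
    """strain, stress: 1D arrays from the uniaxial test."""
    (a, b), _ = curve_fit(exp_model, strain, stress, p0=(1.0, 1.0))
    residual = stress - exp_model(strain, a, b)
    return a, b, np.sqrt((residual ** 2).mean())   # parameters + RMS error
```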
Virtual surgery simulation plays an increasingly important role as a planning aid for the surgeon. A reliable simulation method to predict the surgical outcome of breast reconstruction and breast augmentation procedures does not yet exist. However, a method to pre-operatively assess the result of the procedure would be useful to ensure a symmetrical and natural-looking result, and could be a practical means of communication with the patient. In this paper, we present a basic framework to simulate a subglandular breast implantation. First, we propose a method to build a model of the patient's anatomy, based on a 3D picture of the skin surface in combination with thickness estimates of the soft tissue surrounding the breast. This approach is cheap and fast, and the picture can be taken while the patient is standing upright, which makes it advantageous compared to conventional CT- or MR-based methods. Second, a set of boundary conditions is defined to mimic the effect of the implant. Finally, we compute the new equilibrium geometry using the iterative FEM-based Mass Tensor Method, which is computationally more efficient than the traditional FEM approach since sufficient precision can be achieved with a limited number of iterations. We illustrate our approach with a preliminary validation study on 4 patients. We obtain promising results, with a mean error between the simulated and the true post-operative breast geometry below 4 mm and a maximal error below 10 mm, which is found to be sufficiently accurate for visual assessment in clinical practice.
Constructing anatomical shape from sparse information is a challenging task, and a priori information is often required to handle this otherwise ill-posed problem. In this paper, the problem is formulated as a three-stage optimal estimation process using an a priori dense surface point distribution model (DS-PDM). The DS-PDM itself is constructed from an already-aligned training shape set using Loop subdivision and provides a dense, smooth description of all a priori training shapes, which facilitates all three stages as follows. The first stage, registration, iteratively estimates the scale and the 6-dimensional (6D) rigid registration transformation between the mean shape of the DS-PDM and the input points using the iterative closest point (ICP) algorithm; owing to the dense description of the mean shape, a simple point-to-point distance suffices to speed up the search for closest point pairs. The second stage, morphing, optimally and robustly estimates a dense patient-specific template surface from the DS-PDM using Mahalanobis-distance-based regularization. The estimated template surface is then fed to the third stage, deformation, which uses a newly formulated kernel-based regularization to further reduce the reconstruction error. The proposed method is especially useful for accurate and stable surface reconstruction from sparse information when only a small number of a priori training shapes are available. It has been successfully tested on anatomical shape reconstruction of femoral heads using only dozens of sparse points, yielding very promising results.
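The registration stage can be pictured as a bare-bones ICP loop in which closest points on the dense mean shape are looked up in a k-d tree and the rigid transform is re-estimated by the standard SVD solution. The sketch below (scale estimation and convergence checks omitted) illustrates the idea and is not the DS-PDM implementation itself:

    import numpy as np
    from scipy.spatial import cKDTree

    def rigid_fit(src, dst):
        # Least-squares rotation/translation mapping src onto dst (SVD method).
        cs, cd = src.mean(0), dst.mean(0)
        u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
        r = vt.T @ u.T
        if np.linalg.det(r) < 0:          # guard against reflections
            vt[-1] *= -1
            r = vt.T @ u.T
        return r, cd - r @ cs

    def icp(points, model, iters=30):
        # points: sparse digitized points; model: dense mean-shape vertices.
        tree = cKDTree(model)             # dense model -> fast closest points
        src = points.copy()
        for _ in range(iters):
            _, idx = tree.query(src)      # simple point-to-point pairing
            r, t = rigid_fit(src, model[idx])
            src = src @ r.T + t
        return src

The density of the mean shape is what makes the plain point-to-point lookup adequate here, as the abstract notes.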
A novel approach is presented which combines rotational X-ray imaging, real-time fluoroscopic X-ray imaging and real-time catheter tracking for improved guidance in interventional electrophysiology procedures. Rotational X-ray data and real-time fluoroscopy data obtained from a Philips FD10 flat detector X-ray system are registered with real-time localization data from catheter tracking equipment. The visualization and registration of rotational X-ray data with catheter location data enable the physician to better appreciate the underlying anatomy of interest in three dimensions and to navigate the interventional or mapping device more effectively. Furthermore, the fused information streams from rotational X-ray, real-time X-ray fluoroscopy and real-time three-dimensional catheter locations offer direct imaging feedback during interventions, facilitating navigation and potentially improving clinical outcome. The technique can also reduce the fluoroscopy time required in a procedure, since the catheter is registered and visualized with off-line projection data from various view angles. We show a demonstrator which integrates, registers, and visualizes the various data streams and which can be implemented in the clinical workflow with reasonable effort. Results are presented based on an experimental setup, and the robustness and accuracy of the technique have been determined in phantom studies.
The utility of X-ray fused with MRI (XFM) using external fiducial markers to perform targeted endomyocardial injections in infarcted swine hearts was tested. Endomyocardial injections of Feridex-labeled mesenchymal stromal cells (Fe-MSC) were performed in the previously infarcted hearts of 12 Yucatan miniswine (33-67 kg). Animals had pre-injection cardiac MRI, XFM-guided endomyocardial injection of Fe-MSC suspension spiked with tissue dye, and post-injection MRI. Twenty-four hours later, after euthanasia, the hearts were excised, sliced and stained with TTC. During the injection procedure, operators were provided with 3D surfaces of the endocardium, epicardium, myocardial wall thickness and infarct registered with live XF images to facilitate device navigation and the choice of injection location. In total, 130 injections were performed in hearts whose diastolic wall thickness ranged from 2.6 to 17.7 mm. Visual inspection of the pattern of dye staining on TTC-stained heart slices correlated (r=0.98) with XFM-derived injection locations mapped onto delayed hyperenhancement MRI and with the susceptibility artifacts seen on post-injection T2*-weighted gradient echo MRI. The in vivo target registration error was 3.17±2.61 mm (n=64), and 75% of injections were within 4 mm of the predicted location. 3D to 2D registration of XF and MR images using external fiducial markers enables accurate targeted endomyocardial injection in a swine model of myocardial infarction. The present data suggest that the safety and efficacy of this approach for performing targeted endomyocardial delivery should be evaluated further clinically.
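The reported accuracy figures are per-injection Euclidean distances between predicted and measured locations; the bookkeeping is as simple as the sketch below (hypothetical arrays, units in mm):

    import numpy as np

    def tre_stats(predicted, actual, tol_mm=4.0):
        # Target registration error per injection and fraction within tolerance.
        err = np.linalg.norm(predicted - actual, axis=1)
        return err.mean(), err.std(), np.mean(err <= tol_mm)

    pred = np.random.rand(64, 3) * 50.0          # hypothetical predicted sites
    act = pred + np.random.normal(0.0, 2.0, pred.shape)
    mean, sd, frac = tre_stats(pred, act)
    print("TRE %.2f +/- %.2f mm, %.0f%% within 4 mm" % (mean, sd, 100 * frac))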
The paper describes a computer-aided navigation system using image fusion to support endoscopic interventions such as the accurate collection of biopsy specimens. In particular, an endoscope which provides the physician with real-time ultrasound (US) and a video image is equipped with an electromagnetic tracking sensor. An image slice that corresponds to the actual image of the US scan head is derived from a preoperative computed tomography (CT) volume data set by means of oblique reformatting, and both views are displayed side by side. The position of the image acquired by the US scan head is determined by the miniaturized electromagnetic tracking system (EMTS) after applying a calibration to the endoscope's scan head. The relative orientation between the patient coordinate system and a preoperative dataset (such as a CT or magnetic resonance (MR) image) is derived from a 2D/3D registration. This was achieved by calibrating an interventional CT slice by means of an optical tracking system (OTS), using the same algorithm as for the US calibration; the interventional CT slice is then used for a 2D/3D registration into the coordinate system of the preoperative CT. The fiducial registration error (FRE) for the US calibration amounted to 3.6 mm +/- 2.0 mm. For the interventional CT we found an FRE of 0.36 +/- 0.12 mm. The error for the 2D/3D registration was 2.3 +/- 0.5 mm. The point-to-point registration between the OTS and the EMTS was accomplished with an FRE of 0.6 mm.
Real-time 3D optical tracking of free-hand imaging devices or surgical tools has been studied and employed for object localization in many minimally invasive interventions. However, the surgical workspace for many interventional procedures is often sub-dermal, with tool access through ports from surgical incisions or anatomical orifices. To maintain the optical line-of-sight criterion, the external extensions of inserted imaging devices and rigid surgical tools must be tracked to localize the internal tool tips. Unfortunately, tracking by this form of correspondence is very susceptible to noise, as orientation errors on the external tracked end compound into both rotational and translational errors on the internal workspace position. These translational errors are proportional to the length of the probe and the sine of the angulation error, so small angulation errors can quickly compromise the accuracy of tool tip localization. We propose a real-time tracking correction technique that uses the rotational fulcrum created by the device entry port to minimize the effect of translational and rotational noise errors on tool tip localization. Our technique could apply to many types of interventions, but we focus on the prostate biopsy procedure, tracking a transrectal ultrasound (TRUS) probe commonly used for prostate biopsies. In vitro studies were performed using the Claron Technology MicronTracker 2 to track a TRUS probe in a fixed rotational device. Our experimental results showed an order of magnitude improvement in the RMS localization of the internal TRUS probe tip using fulcrum correction over the raw tracking information.
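One simple way to apply such a fulcrum constraint is to discard the noisy tracked orientation and re-derive the shaft axis from the line joining the tracked external body to the known port location, which the rigid probe must pass through. The sketch below uses a hypothetical geometry to show the effect: a 1-degree angulation error displaces the naive tip estimate of a 250 mm probe by roughly 4.4 mm, while the fulcrum-constrained estimate is unaffected. The authors' exact correction may differ.

    import numpy as np

    def fulcrum_corrected_tip(external_pt, raw_axis, fulcrum, shaft_len):
        # The shaft must pass through the entry port, so rebuild the axis
        # from the external tracked point and the fulcrum location.
        axis = fulcrum - external_pt
        axis /= np.linalg.norm(axis)
        naive = external_pt + shaft_len * raw_axis   # noisy orientation
        corrected = external_pt + shaft_len * axis   # fulcrum constraint
        return naive, corrected

    ext = np.zeros(3)
    true_axis = np.array([0.0, 0.0, 1.0])
    fulcrum = ext + 120.0 * true_axis                # port 120 mm down the shaft
    noisy = np.array([np.sin(np.radians(1.0)), 0.0, np.cos(np.radians(1.0))])
    naive, corr = fulcrum_corrected_tip(ext, noisy, fulcrum, 250.0)
    true_tip = ext + 250.0 * true_axis
    print(np.linalg.norm(naive - true_tip), np.linalg.norm(corr - true_tip))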
Rapid registration of multimodal cardiac images can improve image-guided cardiac surgeries and cardiac disease diagnosis. While mutual information (MI) is arguably the most suitable registration technique, this method is too slow to converge for real-time cardiac image registration; moreover, correct registration may not coincide with a global or even local maximum of MI. These limitations become quite evident when registering three-dimensional (3D) ultrasound (US) images and dynamic 3D magnetic resonance (MR) images of the beating heart. To overcome these issues, we present a registration method that uses a reduced number of voxels while retaining adequate registration accuracy. Prior to registration we preprocess the images such that only the most representative anatomical features are depicted. By selecting samples from the preprocessed images, our method dramatically speeds up the registration process while ensuring correct registration. We validated this registration method by registering dynamic US and MR images of the beating heart of a volunteer. Experimental results on in vivo cardiac images demonstrate significant improvements in registration speed without compromising registration accuracy. A second validation study registered US and computed tomography (CT) images of a rib cage phantom, using two similarity metrics, MI and normalized cross-correlation (NCC). Experimental results on the rib cage phantom indicate that our method can achieve adequate registration accuracy within 10% of the computation time of conventional registration methods. We believe this method has the potential to facilitate intra-operative image fusion for minimally invasive cardio-thoracic surgical navigation.
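The similarity measure itself is standard; what the method changes is how many voxels feed it. A minimal sketch of MI evaluated on a random voxel subset (synthetic volumes, natural-log entropies):

    import numpy as np

    def mutual_information(a, b, bins=32):
        # Joint histogram -> joint and marginal probabilities -> MI.
        hist, _, _ = np.histogram2d(a, b, bins=bins)
        p = hist / hist.sum()
        px, py = p.sum(1), p.sum(0)
        nz = p > 0
        return (p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum()

    rng = np.random.default_rng(0)
    fixed = rng.random((64, 64, 64))
    moving = fixed + 0.05 * rng.standard_normal(fixed.shape)
    idx = rng.choice(fixed.size, size=10000, replace=False)   # sampled voxels
    print(mutual_information(fixed.ravel()[idx], moving.ravel()[idx]))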
An extensive simulation study was performed to examine different point-to-surface registration techniques for intraoperative registration of preoperative patient data to points collected with electrophysiologic anatomy mapping systems. Three point-to-surface registration methods were evaluated using simulated points sampled from a preoperative heart model; the Downhill Simplex (DS) based method outperformed both the Iterative Closest Point (ICP) method and a chamfer transform based method. One hundred simulations were performed under a variety of noise and sampling conditions. A root mean squared distance (RMSD) error of less than four pixels was observed when 2-pixel standard deviation Gaussian noise was added to the point cloud coordinates; this registration error was mainly due to the added noise in the sampled points. A near-optimal registration can be achieved when 50 or more points randomly sampled on the surface are used, and reasonable registration can be achieved with 25 points. A motion-compensating approach to registration was evaluated in order to account for the different transformation that each anatomical structure may undergo during the procedure due to respiratory motion and other factors. A piecewise registration method, which registers each anatomical structure independently, was evaluated, and favorable results were obtained as compared to a global registration approach. Further validation is in progress to evaluate the piecewise registration using realistic dynamic phantoms and in vivo animal studies.
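A downhill simplex point-to-surface registration is straightforward to prototype: optimize the six rigid parameters with a Nelder-Mead search, scoring each candidate pose by the RMSD from the transformed points to their closest surface samples. The sketch below assumes the heart surface is available as a dense point cloud; it illustrates the DS approach generically rather than reproducing the study's implementation:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial import cKDTree

    def rot(rx, ry, rz):
        # Rotation matrix from three Euler angles (radians).
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def register_simplex(points, surface_samples):
        tree = cKDTree(surface_samples)
        def rmsd(x):                       # x = (rx, ry, rz, tx, ty, tz)
            moved = points @ rot(*x[:3]).T + x[3:]
            d, _ = tree.query(moved)
            return np.sqrt(np.mean(d ** 2))
        return minimize(rmsd, np.zeros(6), method="Nelder-Mead")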
In this paper we present an extensive quantitative validation of 3D facial soft tissue simulation for maxillofacial surgery planning. The study group contained 10 patients. In previous work we presented a new Mass Tensor Model to rapidly simulate the new facial appearance after maxillofacial surgery. The 10 patients were preoperatively CT-scanned and the surgical intervention was planned; 4 months after surgery, a post-operative control CT was acquired. In this study, the simulated facial outlook is compared with the post-operative image data. After defining corresponding points between the predicted and actual post-operative facial skin surfaces, using a variant of the non-rigid TPS-RPM algorithm, the distances between these correspondences are quantified and visualized in 3D. As shown, the average median distance measures only 0.60 mm and the average 90th percentile stays below 1.5 mm. We conclude that our model provides an accurate prediction of the real post-operative outcome and is therefore suitable for use in clinical practice.
In this paper we propose a principled approach for shape comparison. Given two surfaces, one-to-one correspondences are determined using the Laplace equation. The distance between corresponding points is then used to define both global and local dissimilarity statistics between the surfaces. This technique provides a powerful method to compare shapes both locally and globally for the purpose of segmentation, registration or shape analysis. For improved accuracy, we propose a Boundary Element Method. Our approach is applicable to datasets of any dimension and offers subpixel resolution. We illustrate the usefulness of the technique for validation of segmentation, by defining global dissimilarity statistics and visualizing errors locally on color-coded surfaces. We also show how our technique can be applied to the comparison of multiple shapes.
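The construction can be prototyped on a voxel grid: solve Laplace's equation with one shape's boundary held at 0 and the other's at 1, then pair boundary points by following the gradient of the solution. The paper's Boundary Element Method is the accurate route; the finite-difference relaxation below is only a stand-in to make the idea concrete:

    import numpy as np

    def laplace_field(inner, outer, iters=5000):
        # u = 0 on the inner shape, u = 1 on the outer; harmonic in between.
        u = np.full(inner.shape, 0.5)
        u[inner], u[outer] = 0.0, 1.0
        for _ in range(iters):
            u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:])
            u[inner], u[outer] = 0.0, 1.0    # re-impose boundary conditions
        return u

    # Hypothetical nested shapes: two concentric circles on a 100x100 grid.
    yy, xx = np.mgrid[0:100, 0:100]
    r = np.hypot(xx - 50, yy - 50)
    u = laplace_field(r < 15, r > 40)

Tracing the streamlines of grad(u) from one boundary to the other yields the one-to-one correspondences whose lengths feed the dissimilarity statistics.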
Surgical planning in oncological liver surgery is based on the location of the 8 anatomical segments defined by Couinaud and of the tumors inside these structures. Detecting the boundaries between the segments is thus the first step of preoperative planning. The proposed method, devoted to binary images of livers segmented from CT scans, has been designed to delineate these segments. It automatically detects a set of landmarks using a priori anatomical knowledge and differential geometry criteria. These landmarks are then used to position Couinaud's segments. Validations performed on 7 clinical cases indicate that the method is reliable for most of these separation planes.
Volume measurement plays an important role in many medical applications in which physicians need to quantify tumor growth over time. For example, tumor volume estimation can help physicians diagnose patients and evaluate the effects of therapy. These measurements can also help researchers compare segmentation methods. For researchers to quickly check the results of volume data processing, they need a graphical interface with volume visualization features. VolView is an interactive visualization environment which provides such an interface. The "plug-in" architecture of VolView allows it to be used as a visualization platform for evaluation of advanced image processing algorithms. In this work, we implemented VolView plug-ins for two volume measurement algorithms and three volume comparison algorithms. One volume measurement algorithm involves voxel counting and the other provides finer volume measurement by anti-aliasing the tumor volume. The three volume comparison methods are a maximum surface distance measure, mean absolute surface distance, and a volumetric overlap measure. In this implementation, we rely heavily on software components from the open source Insight Segmentation and Registration Toolkit (ITK). The paper also presents the use of the VolView environment to evaluate liver tumor segmentation based on level set techniques. The simultaneous truth and performance level estimation (STAPLE) method was used to evaluate the estimated ground truth from multiple radiologists.
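The measurement and overlap computations reduce to a few array operations once the masks are in hand (ITK supplies them as filters; the numpy sketch below, on hypothetical masks, states the arithmetic directly):

    import numpy as np

    def volume_ml(mask, spacing_mm):
        # Voxel counting: occupied voxels times the voxel volume (mm^3 -> mL).
        return mask.sum() * np.prod(spacing_mm) / 1000.0

    def overlap_metrics(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return {"jaccard": inter / union,
                "dice": 2.0 * inter / (a.sum() + b.sum())}

    # Hypothetical binary tumor masks on a 0.7 x 0.7 x 1.0 mm grid.
    a = np.zeros((40, 40, 40), bool); a[10:30, 10:30, 10:30] = True
    b = np.zeros_like(a);             b[12:32, 10:30, 10:30] = True
    print(volume_ml(a, (0.7, 0.7, 1.0)), overlap_metrics(a, b))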
We present a new method for modeling organ deformations due to successive resections. We use a biomechanical model of the organ and compute its volume-displacement solution based on the eXtended Finite Element Method (XFEM). The key feature of XFEM is that the material discontinuities induced by each new resection can be handled without remeshing or mesh adaptation, as would be required by the conventional Finite Element Method (FEM). We focus on the application of preoperative image updating for image-guided surgery. Proof-of-concept demonstrations are shown for synthetic and real data in the context of neurosurgery.
Shift of brain tissue during surgical procedures affects the precision of image-guided neurosurgery (IGNS). To improve the accuracy of the alignment between the patient and the images, finite element model-based non-rigid registration methods have been investigated. The best prior estimate (BPE), the forced displacement method (FDM), the weighted basis solutions (WBS), and the adjoint equations method (AEM) are versions of this approach that have appeared in the literature. In this paper, we present a quantitative comparison study on a set of three patient cases. Three-dimensional displacement data from the surface and subsurface were extracted using intraoperative ultrasound (iUS) and intraoperative stereovision (iSV). These data were then used as the "ground truth" in a quantitative study to evaluate the accuracy of the estimates produced by the finite element models. Different types of clinical cases are presented, including distension and a combination of sagging and distension. In each case, the performance of the four methods is compared. The AEM method, which recovered 26-62% of surface brain motion and 20-43% of the subsurface deformation, produced the best fit between the measured data and the model estimates.
A major challenge in neurosurgical oncology is to achieve maximal tumor removal while avoiding postoperative neurological deficits. Therefore, estimation of the brain deformation during image-guided tumor resection is necessary. While anatomic MRI is highly sensitive to intracranial pathology, its specificity is limited, and different pathologies may have a very similar appearance on anatomic MRI. Moreover, since fMRI and diffusion tensor imaging are not currently available during surgery, non-rigid registration of preoperative MR with intra-operative MR is necessary. This article presents a translational research effort that aims to integrate a number of state-of-the-art technologies for MRI-guided neurosurgery at Brigham and Women's Hospital (BWH). Our ultimate goal is to routinely provide the neurosurgeons with accurate information about brain deformation during surgery. The current system is tested during the weekly neurosurgeries in the open magnet at the BWH. The preoperative data are processed prior to surgery, while both rigid and non-rigid registration algorithms are run in the vicinity of the operating room. The system was tested on 9 image datasets from 3 neurosurgery cases. A method based on edge detection is used to quantitatively validate the results, with the 95% Hausdorff distance between edge points serving as the accuracy estimate. Overall, the minimum error is 1.4 mm, the mean error 2.23 mm, and the maximum error 3.1 mm. The mean ratio between the brain deformation estimate and rigid alignment is 2.07, indicating that our results can be about twice as precise as the current technology. The major contribution of the presented work is the rigid and non-rigid alignment of pre-operative fMRI with intra-operative 0.5T MRI achieved during neurosurgery.
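The validation metric, the 95th-percentile Hausdorff distance between two edge point sets, can be computed as below (edge extraction assumed already done; hypothetical inputs):

    import numpy as np
    from scipy.spatial import cKDTree

    def hausdorff95(a_pts, b_pts):
        # Symmetric 95th-percentile Hausdorff distance between point sets.
        d_ab, _ = cKDTree(b_pts).query(a_pts)   # each a-point to nearest b
        d_ba, _ = cKDTree(a_pts).query(b_pts)   # each b-point to nearest a
        return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

    a = np.random.rand(500, 3) * 100.0          # hypothetical edge points (mm)
    b = a + np.random.normal(0.0, 1.0, a.shape)
    print(hausdorff95(a, b))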
Compensating for intraoperative brain shift using computational models has shown promising results. Since computational time is an important factor during neurosurgery, a priori knowledge of the possible sources of deformation can increase the accuracy of model-updated image-guided systems (MUIGS). In this paper, we use sparse intraoperative data acquired with a laser range scanner and introduce a strategy for integrating this information with the computational model. The model solutions are computed preoperatively and are combined with the help of a statistical model to predict the intraoperative brain shift. Validation of this approach is performed against measured intraoperative data. The results indicate our ability to predict intraoperative brain shift to an accuracy of 1.3 mm ± 0.7 mm. This method appears to be a promising technique for increasing the speed and accuracy of MUIGS.
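If each precomputed model solution is treated as one column of a basis, combining them against the sparse scanner data is a linear least-squares fit. A minimal sketch under that interpretation (hypothetical dimensions; the paper's statistical model may weight the solutions differently):

    import numpy as np

    def predict_shift(U_sparse, measured, U_full):
        # Fit weights so the weighted sum of precomputed solutions matches
        # the sparse surface measurements, then evaluate on the full mesh.
        w, *_ = np.linalg.lstsq(U_sparse, measured, rcond=None)
        return U_full @ w

    # 3 precomputed solutions sampled at 40 scanner points and 5000 mesh nodes.
    U_sparse = np.random.rand(40, 3)
    U_full = np.random.rand(5000, 3)
    measured = U_sparse @ np.array([0.6, 0.3, 0.1])
    print(predict_shift(U_sparse, measured, U_full).shape)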
Intra-operative quality assurance and dosimetry optimization in prostate brachytherapy critically depend on the ability to discern the locations of implanted seeds. Various methods exist for seed matching and reconstruction from multiple segmented C-arm images. Unfortunately, using three or more images makes the problem NP-hard, i.e. no polynomial-time algorithm can provably compute the complete matching; typically, a statistical analysis of performance is considered sufficient. Hence it is of utmost importance to exploit all the available information in order to minimize the matching and reconstruction errors. Current algorithms use only the information about seed centers, disregarding the orientations and lengths of the seeds. While the latter has little dosimetric impact, it can positively contribute to improving the seed matching rate and the 3D implant reconstruction accuracy. It can also become critical information when hidden and spuriously segmented seeds need to be matched, where reliable and generic methods are not yet available. Expecting orientation information to be useful in reconstructing large and dense implants, we have developed a method which incorporates seed orientation information into our previously proposed reconstruction algorithm (MARSHAL). A simulation study shows that under normal segmentation errors, when considering seed orientations, implants of 80 to 140 seeds with a density of 2.0-3.0 seeds/cc give an average matching rate >97% using three-image matching, higher than the matching rate of about 96% obtained when considering only seed positions. The information on seed orientations thus appears to be a valuable addition to fluoroscopy-based brachytherapy implant reconstruction.
Currently available seed reconstruction algorithms are based on the assumption that accurate information about the imaging geometry is known. The assumption is valid for isocentric X-ray units such as radiotherapy simulators. However, the large majority of clinics performing prostate brachytherapy today use C-arms for which imaging parameters such as the source-to-axis distance, the image acquisition angles, and the central axis of the image are not accurately known. We propose a seed reconstruction algorithm that requires no such knowledge of the geometry. The new algorithm makes use of a perspective projection matrix, which can be easily derived from a set of known reference points and which maps a point in 3D space to the imaging coordinate system. An accurate representation of the imaging geometry can be derived from the generalized projection matrix (GPM) with its eleven degrees of freedom. In this paper we show how the GPM can be derived given the theoretical minimum number of reference points, and we propose an algorithm to compute the line equation that defines the backprojection operation given the GPM. The approach can be extended to any ray-tracing based seed reconstruction algorithm. Reconstruction using the GPM does not require calibration of the C-arm, the images can be acquired at arbitrary angles, and the reconstruction is performed in near real time. Our simulations show that reconstruction using the GPM is robust and that its accuracy is independent of the source-to-detector distance and of the location of the reference points used to generate the GPM. Seed reconstruction from C-arm images acquired at unknown geometry provides a useful tool for intra-operative dosimetry in prostate brachytherapy.
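The GPM is the familiar 3x4 perspective projection matrix: with six or more reference points it can be recovered by the direct linear transform, and each segmented seed then defines a backprojection ray. A minimal sketch (no distortion model, hypothetical inputs; the paper's own derivation and conditioning are not reproduced):

    import numpy as np

    def dlt_projection_matrix(X, x):
        # X: Nx3 reference points (N >= 6); x: Nx2 image points. Returns 3x4 P.
        rows = []
        for (Xw, Yw, Zw), (u, v) in zip(X, x):
            rows.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u*Xw, -u*Yw, -u*Zw, -u])
            rows.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v*Xw, -v*Yw, -v*Zw, -v])
        _, _, vt = np.linalg.svd(np.asarray(rows, float))
        return vt[-1].reshape(3, 4)        # null vector of the system, up to scale

    def backprojection_ray(P, u, v):
        # Ray through pixel (u, v): source position plus a direction vector.
        M, p4 = P[:, :3], P[:, 3]
        center = -np.linalg.solve(M, p4)   # X-ray source (camera center)
        direction = np.linalg.solve(M, np.array([u, v, 1.0]))
        return center, direction / np.linalg.norm(direction)

Intersecting (in the least-squares sense) the rays of matched seeds across images then gives the 3D seed positions.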
Introduction: Rapid prototype modeling (RPM) has been used in medicine principally for bones, which are easily extracted from CT data sets, for planning orthopaedic, plastic or maxillo-facial interventions, and/or for designing custom prostheses and implants. Based on newly available technology, highly valuable multimodality approaches can now be applied to RPM, particularly for complex musculo-skeletal (MSK) tumors where multimodality often transcends CT alone. Methods: CT data sets are acquired for the primary evaluation of MSK tumors in parallel with other modalities (e.g., MR, PET, SPECT). In our approach, the CT is first segmented to provide the bony anatomy for RPM, and all other data sets are then registered to the CT reference. Parametric information relevant to the tumor's characterization is then extracted from the multimodality space and merged with the CT anatomy to produce a hybrid RPM-ready model. This model, which also accommodates digital multimodality visualization, is then produced on the latest generation of 3D printers, which reproduce both shapes and colors. Results: Multimodality models of complex MSK tumors have been physically produced on modern RPM equipment. This new approach has been found to be a clear improvement over the previously disconnected physical RPM and digital multimodality visualization. Conclusions: New technical developments keep opening doors to sophisticated medical applications that can directly impact the quality of patient care. Although this early work still deals with bones as base models for RPM, extensions to encompass soft tissues are already envisioned.
A new 3D ultrasound-based patient positioning system for target localisation during radiotherapy is described. Our system incorporates tracked 3D ultrasound scans of the target anatomy acquired with a dedicated 3D ultrasound probe during both the simulation and treatment sessions, fully automatic 3D ultrasound-to-ultrasound registration, and OPTOTRAK IRLEDs for registering the simulation CT to the ultrasound data. The accuracy of the entire radiotherapy treatment process resulting from the use of our system, from simulation to the delivery of radiation, has been validated on a phantom. The overall positioning error is less than 5 mm, which includes errors from estimating the location of the irradiated region in the phantom.
Lung biopsy is a common interventional radiology procedure. One of the difficulties in performing lung biopsy is that lesions move with respiration. This paper presents a new robotically assisted lung biopsy system for CT fluoroscopy that can automatically compensate for respiratory motion during the intervention. The system consists of a needle placement robot that holds the needle in the CT scan plane, a radiolucent Z-frame for registration of the CT and robot coordinate systems, and a frame grabber to obtain the CT fluoroscopy images in real time. The CT fluoroscopy images are used to noninvasively track the motion of a pulmonary lesion in real time: the position of the lesion in the images is automatically determined by the image processing software, and the motion of the robot is controlled to compensate for the lesion motion. The system was validated under CT fluoroscopy using a respiratory motion simulator, and a swine study demonstrated the feasibility of the technique in a respiring animal.
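Lesion tracking in the streamed fluoroscopy frames can be illustrated with normalized cross-correlation against a lesion template; the peak location, differenced from its reference position, would drive the compensating robot motion. This generic NCC tracker is an illustration, not necessarily the system's algorithm:

    import numpy as np
    from scipy.signal import fftconvolve

    def track_lesion(frame, template):
        # Fast normalized cross-correlation; returns top-left of best match.
        t = template - template.mean()
        num = fftconvolve(frame, t[::-1, ::-1], mode="valid")
        ones = np.ones_like(t)
        s1 = fftconvolve(frame, ones, mode="valid")        # local sums
        s2 = fftconvolve(frame ** 2, ones, mode="valid")   # local energy
        var = np.maximum(s2 - s1 ** 2 / t.size, 1e-12)
        ncc = num / np.sqrt(var * (t ** 2).sum())
        return np.unravel_index(np.argmax(ncc), ncc.shape)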
Tissue engineering attempts to address the ever-widening gap between the demand and supply of organ and tissue transplants using natural and biomimetic scaffolds. Current scaffold fabrication techniques can be broadly classified into (a) conventional, irreproducible, stochastic techniques producing biomorphic, "secundum naturam" but suboptimal scaffold architecture, and (b) rapidly emerging, repeatable, computer-controlled Solid Freeform Fabrication (SFF) producing "contra naturam" scaffold architecture. This paper presents an image-based scaffold optimization strategy based on microCT images of the conventional scaffolds. This approach, attempted here for the first time, synergistically exploits the orthogonal techniques to create repeatable, biomorphic scaffolds with optimal scaffold geometry. This image-based, computer-assisted intervention to improve the status quo of scaffold fabrication might contribute to the previously elusive translation of promising benchside tissue analogs to the clinical bedside.
This paper describes a virtual image chain for medical display (project VICTOR, funded in the 5th Framework Programme by the European Commission). The chain starts from the raw data of an image digitizer (CR, DR) or from synthetic patterns, and covers image enhancement (MUSICA by Agfa) and both display possibilities: hardcopy (film on a viewing box) and softcopy (monitor). The key feature of the chain is a complete image-wise approach. A first prototype is implemented in an object-oriented software platform. The display chain consists of several modules. Raw images are either taken from scanners (CR, DR) or from a pattern generator, in which the characteristics of DR/CR systems are introduced via their MTF and their dose-dependent Poisson noise. The image undergoes image enhancement and is then displayed. For softcopy display, color and monochrome monitors are used in the simulation. The image is down-sampled; the non-linear response of a color monitor is taken into account by the GOG or S-curve model, whereas the DICOM Grayscale Standard Display Function (GSDF) is used for monochrome display, and the MTF of the monitor is applied to the image in intensity levels. For hardcopy display, the combination of film, printer, lightbox and viewing condition is modeled. The image is up-sampled and the DICOM GSDF or a Kanamori look-up table is applied; an anisotropic model of the printer's MTF is applied to the image in intensity levels, and the density-dependent color (XYZ) of the hardcopy film is introduced by look-up tables. Finally, a Human Visual System model is applied to the intensity images (XYZ in terms of cd/m2) in order to eliminate non-visible differences. Comparison yields visible differences, which are quantified by higher-order image quality metrics. A dedicated image viewer is used for the visualization of the intensity image and the visual difference maps.
We measured the modulation transfer functions (MTFs) of liquid crystal displays (LCDs) by rectangular waveform analysis. In this method, a bar pattern shown on the monitor surface is photographed with a digital camera and the picture is analyzed on a personal computer. The monitors examined were monochrome LCDs of 1M (about one million pixels), 2M, 3M, and 5M, and color LCDs of 1M, 2M, and 3M. The 2M displays included both IPS and VA panel types, and the 3M and 5M monochrome LCDs were examined both with and without a protective filter. Two or three displays were used for each type. In both the monochrome and the color LCDs, the MTF became higher as the matrix size increased. In the monochrome LCDs, the MTF in the horizontal direction was higher than that in the vertical direction, and the presence of a protective filter made no difference. The MTFs of the color LCDs showed little difference between the horizontal and vertical directions. The MTFs of the LCDs are influenced by the shape and fill factor of the pixel and by the composition of its sub-pixels.
Image-guided implantology using navigation systems is more accurate than manual dental implant insertion. The underlying image data are usually derived from computed tomography; the suitability of MR imaging for dental implant planning has so far been a marginal issue. MRI data from cadaver heads were acquired using various MRI sequences. The data were assessed for the quality of anatomical imaging, geometric accuracy and susceptibility to dental metal artefacts. For dental implant planning, 3D models of the jaws were created. A software system for segmentation of the mandible and maxilla from MRI data was implemented using C++, MITK, and Qt. With the VIBE_15 sequence, image data with high geometric accuracy were acquired, and dental metal artefacts were lower than in CT data of the same heads. Segmentation of the jaws was feasible, in contrast to segmentation of the dentition, owing to a lack of contrast with the intraoral soft tissue structures. MRI is thus a suitable method for imaging the region of the mouth and jaws: the geometric accuracy is excellent and the susceptibility to artefacts is low. However, two limitations remain. Firstly, imaging of the dentition needs further improvement to allow accurate segmentation of these regions. Secondly, the sequence used in this study takes several minutes and hence is susceptible to motion artefacts.
Electromagnetic trackers have found inroads into medical applications as a tool for navigation in recent years. Their susceptibility to interference from both electromagnetic and ferromagnetic sources has prompted several accuracy assessment studies. To the best of our knowledge, this is the first accuracy study conducted to characterize the measurement accuracy of an NDI AURORA electromagnetic tracker within a CyberKnife radiosurgery suite. CyberKnife is a frameless, stereotactic radiosurgery device used to ablate tumors within the brain and spine and, more recently, the chest and abdomen. This paper uses a data collection protocol to collect uniformly distributed data points within a subset of the AURORA measurement volume in a CyberKnife suite. The key aim of the study is to determine the extent to which the large metal components of the CyberKnife device and its robot mount affect the overall performance of the AURORA electromagnetic tracker. A secondary goal is to determine the variation in accuracy and device behavior in the presence of ionizing radiation when the LINAC is turned on.
Image-guided, computer-assisted neurosurgery has emerged to improve localization and targeting, to provide a better anatomic definition of the surgical field, and to decrease invasiveness. Usually, in image-guided surgery, a computer displays the surgical field in a CT/MR environment, using axial, coronal or sagittal views, or even a 3D representation of the patient. Such a system forces the surgeon to look away from the surgical scene to the computer screen. Moreover, this information, being pre-operative imaging, cannot be updated during the operation, so it remains valid for guidance mainly in the first stage of the surgical procedure, and mainly for rigid structures such as bone. To address these two constraints, we are developing an ultrasound-guided surgical microscope. Such a system takes advantage of the fact that surgical microscopy and ultrasound systems are already used in neurosurgery, so it does not add complexity to the surgical procedure. We have integrated an optical tracking device into the microscope, together with an augmented reality overlay system that avoids the need to look away from the scene, providing correctly aligned surgical images with sub-millimeter accuracy. In addition to the standard CT and 3D views, we are able to track an ultrasound probe; after calibration and registration of the imaging, the acquired image is correctly projected onto the overlay system, so the surgeon can always localize the target and verify the effects of the intervention. Several tests of the system have already been performed to evaluate its accuracy, and clinical experiments are in progress to validate its clinical usefulness.
Vein localization and catheter insertion constitute the first and perhaps most important phase of many medical procedures. Currently, catheterization is performed manually by trained personnel. This process can prove problematic, however, depending upon various physiological factors of the patient. We present in this paper initial work on localizing surface veins via near-infrared (NIR) imaging, which has previously been shown to be effective in enhancing the visibility of surface veins, combined with structured light ranging. The eventual goal of the system is to serve as the guidance for a fully automatic (i.e., robotic) catheterization device. We locate the vein regions in the 2D NIR images using standard image processing techniques. We employ an NIR line-generating LED module to implement structured light ranging and construct a 3D topographic map of the arm surface. The located veins are mapped onto the arm surface to provide a camera-registered representation of the arm and veins. We describe the techniques in detail and provide example imagery and 3D surface renderings.
Currently, the removal of kidney tumor masses relies only on direct or laparoscopic visualization, resulting in prolonged procedure and recovery times and reduced clear margins. Applying current image-guided surgery (IGS) techniques, such as those used in liver cases, to kidney resections (nephrectomies) presents a number of complications. Most notable is the limited field of view of the intraoperative kidney surface, which constrains the ability to obtain a surface delineation that is geometrically descriptive enough to drive a surface-based registration. Two different phantom orientations were used to model the laparoscopic and traditional partial nephrectomy views. For the laparoscopic view, fiducial point sets were compiled from a CT image volume using anatomical features such as the renal artery and vein. For the traditional view, markers attached to the phantom setup were used as fiducials and targets. The fiducial points were used to perform a point-based registration, which then served as a guide for the surface-based registration. Surfaces obtained with a laser range scanner (LRS) were registered to each phantom surface using a rigid iterative closest point algorithm. Subsets of each phantom's LRS surface were used in a robustness test to determine how predictably their registrations transform the entire surface. Results from both orientations suggest that about half of the kidney's surface needs to be obtained intraoperatively for accurate registration between the image surface and the LRS surface, and that the obtained kidney surfaces were geometrically descriptive enough to perform accurate registrations. This preliminary work paves the way for further development of kidney IGS systems.
In this paper, computerized fluoroscopy with zero-dose image updates for femoral diaphyseal fracture reduction is proposed. It is achieved with a two-step procedure. Starting from a few (normally 2) calibrated fluoroscopic images, the first step, data preparation, automatically estimates the size and pose of the diaphyseal fragments through three-dimensional morphable object fitting using a parametric cylinder model. The projection boundary of each estimated cylinder, a quadrilateral, is then fed to a region-information-based active contour model to extract the fragment contours from the input fluoroscopic images. After that, each point on the contour is interpolated relative to the four vertices of the corresponding quadrilateral, resulting in four interpolation coefficients per point. The second step, image updating, repositions the fragment projection on each acquired image during bony manipulation. It starts by interpolating the new position of each point on the fragment contour using the interpolation coefficients calculated in the first step and the new position of the corresponding quadrilateral. The position of the quadrilateral is updated in real time according to the positional changes of the associated bone fragments, as determined by the navigation system during fracture reduction. The newly calculated image coordinates of the fragment contour are then fed to an OpenGL®-based texture warping pipeline to achieve real-time image updates. The presented method provides realistic augmented reality for the surgeon, and its application may greatly reduce the X-ray exposure of the patient and the surgical team.
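The abstract specifies only that each contour point is encoded by four coefficients relative to the quadrilateral's vertices; a bilinear parameterization is one natural reading and is what the sketch below assumes. The coefficients are computed once during data preparation and reused to warp the contour whenever the navigation system repositions the quadrilateral:

    import numpy as np

    def bilinear_weights(p, quad, iters=10):
        # quad: vertices v00, v10, v11, v01. Invert the bilinear map
        # p(s,t) = (1-s)(1-t)v00 + s(1-t)v10 + st v11 + (1-s)t v01 by Newton.
        s, t = 0.5, 0.5
        v00, v10, v11, v01 = quad
        for _ in range(iters):
            f = ((1-s)*(1-t)*v00 + s*(1-t)*v10 + s*t*v11 + (1-s)*t*v01) - p
            dfs = (1-t)*(v10 - v00) + t*(v11 - v01)   # df/ds
            dft = (1-s)*(v01 - v00) + s*(v11 - v10)   # df/dt
            ds, dt = np.linalg.solve(np.column_stack((dfs, dft)), -f)
            s, t = s + ds, t + dt
        return np.array([(1-s)*(1-t), s*(1-t), s*t, (1-s)*t])

    quad0 = np.array([[0, 0], [10, 0], [11, 9], [1, 10]], float)
    w = bilinear_weights(np.array([4.0, 5.0]), quad0)   # data preparation
    quad1 = quad0 + np.array([2.0, 1.0])                # repositioned quad
    print(w @ quad1)                                    # updated contour point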
The real-time recovery of the projection geometry is a fundamental issue in interventional navigation applications (e.g., guide wire reconstruction, medical augmented reality). In most works, the intrinsic parameters are assumed to be constant and the extrinsic parameters (C-arm motion) are deduced either from the orientation sensors of the C-arm or from additional sensors (e.g., optical and/or electromagnetic sensors). However, due to the weight of the X-ray tube and the C-arm, the system undergoes deformations which induce variations of the intrinsic parameters as a function of the C-arm orientation. In our approach, we propose to measure the effects of the mechanical deformations on the intrinsic parameters in a calibration procedure. Robust calibration methods exist (the gold standard is multi-image calibration), but they are time consuming and too tedious to set up in a clinical context. For these reasons, we developed an original and easy-to-use method, based on a planar calibration target, which aims at measuring with a high level of accuracy the variation of the intrinsic parameters of a vascular C-arm. The precision of the planar-based method was evaluated by means of error propagation using techniques described in [8]; the precision of the intrinsic parameters proved comparable to that obtained from multi-image calibration. The planar-based method was also successfully used to assess the behavior of the C-arm as a function of its orientation. The results showed a clear variation of the principal point when the LAO/RAO orientation was changed, whereas the intrinsic parameters do not change during cranio-caudal C-arm motion.
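Calibration from a planar target of this kind is commonly implemented with Zhang's method; once the target's markers are detected in a few views, OpenCV's calibrateCamera performs the estimation. The grid geometry below (7x5 markers, 10 mm pitch) is a hypothetical stand-in for the actual target, and this sketch does not reproduce the paper's precision analysis:

    import numpy as np
    import cv2

    # Planar reference points (z = 0) for a hypothetical 7x5 target.
    grid = np.zeros((7 * 5, 3), np.float32)
    grid[:, :2] = np.mgrid[0:7, 0:5].T.reshape(-1, 2) * 10.0

    def intrinsics_from_views(image_points_per_view, image_size):
        # image_points_per_view: list of (35, 2) float32 arrays, one per
        # C-arm orientation; returns camera matrix, distortion, RMS error.
        obj = [grid] * len(image_points_per_view)
        rms, K, dist, _, _ = cv2.calibrateCamera(
            obj, image_points_per_view, image_size, None, None)
        return K, dist, rms

Repeating the estimation at several LAO/RAO and cranio-caudal angles exposes exactly the kind of principal-point variation the study reports.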
Supporting surgeons in performing minimally invasive surgeries can be considered one of the major goals of computer-assisted surgery, and excellent intraoperative visualization is a prerequisite for achieving this aim. The Siremobil Iso-C3D has become a widely used imaging device which, in combination with a navigation system, enables the surgeon to navigate directly within the acquired 3D image volume without any extra registration steps. However, the image quality is rather low compared to a CT scan and the volume size (approx. 12 cm³) limits its application. A regularly used alternative in computer-assisted orthopedic surgery is to use a preoperatively acquired CT scan to visualize the operating field, but the additional registration step necessary to use CT stacks for navigation is quite invasive. The objective of this work is therefore to develop a noninvasive registration technique. We propose a solution that registers a preoperatively acquired CT scan to the intraoperatively acquired Iso-C3D image volume, thereby registering the CT to the tracked anatomy. The procedure aligns both image volumes by maximizing mutual information, an algorithm that has already been applied to similar registration problems with good results. Furthermore, the accuracy of the registration method was investigated in a clinical setup, integrating a navigated Iso-C3D with a tracking system. Initial tests on cadaveric animal bone resulted in mean errors ranging from 0.63 mm to 1.55 mm.
Open source software has tremendous potential for improving the productivity of research labs and enabling the development of new medical applications. The Image-Guided Surgery Toolkit (IGSTK) is an open source software toolkit based on ITK, VTK, and FLTK, and uses the cross-platform tools CMAKE and DART to support common operating systems such as Linux, Windows, and MacOS. IGSTK integrates the basic components needed in surgical guidance applications and provides a common platform for fast prototyping and development of robust image-guided applications. This paper gives an overview of the IGSTK framework and current status of development followed by an example needle biopsy application to demonstrate how to develop an image-guided application using this toolkit.
Decubitus ulcers can have a deleterious effect on the quality of life of some patients, particularly those prone to chronic development of skin ulcerations. The bones of the pelvis are particularly relevant, as nearly half of all ulcerations observed in the hospital are in the pelvic region. This research focuses on the development of methods to extract the ischium and adjacent anatomy from volumetric CT data of the pelvis, to be used for patient-specific modeling of high-pressure regions and the treatment of associated ulcers. Six volumetric CT scans were evaluated to determine the size and shape of the ischial tuberosities. Using oblique images computed from the CT data, cross-sectional measurements (approximately superior-inferior, anterior-posterior, and left-right) were made to estimate the size of the ischial tuberosities; similar measurements were made on the ischial ramus. The mean length of the ischial tuberosities (S-I direction) is 12.35 cm, and the mean dimensions in the L-R and A-P directions are 2.97 cm and 3.78 cm, respectively. For the ischial ramus, the S-I, L-R, and A-P mean lengths are 6.57 cm, 1.72 cm, and 1.49 cm. Due to the limited field of view of the CT datasets, the thickness of the soft tissue (i.e., gluteus maximus and subcutaneous fat) could not be measured. Using the bony measurements together with adjacent soft tissue measurements, an investigator would be able to estimate the posterior pelvic forces for calculations of pressure on the proximal skin, which could then be used to predict ulcerations in patients or to design new ulcer-inhibiting seating devices. Current efforts are focused on collecting a large cohort of data with both bony and soft tissue measurements. Future work will incorporate the physical properties of the soft tissue to specifically predict high-pressure regions.
The manual segmentation and analysis of high-resolution multislice cardiac CT datasets is both labor intensive and time consuming. It is therefore necessary to supply the cardiologist with powerful software tools to segment the myocardium as well as the cardiac cavities and to compute the relevant diagnostic parameters. In this paper we present an automatic cardiac segmentation procedure with minimal user interaction. It is based on a combined bi-temporal statistical model of the left and right ventricle that uses principal component analysis (PCA) as well as independent component analysis (ICA) to model global and local shape variation. To train the model, we used manually drawn end-diastolic and end-systolic contours of the right epicardium and of the left and right endocardium to create triangular surfaces for the training datasets. These surfaces were used to build a mean triangular surface model of the left and right ventricle for the end-diastolic and end-systolic heart phases and to compute the PCA and ICA decorrelation matrices, which are used in a point distribution model (PDM) to model the global and local shape variations. In contrast to many previous attempts at model-based cardiac segmentation, we do not create separate models for the left and right ventricle or for different heart phases; instead, we create one single parameter vector containing the information of both ventricles and both heart phases. This enables us to exploit the correlation between the phases and between the left and right sides to create a model that is more robust and less sensitive, e.g., to poor contrast at the right ventricle.
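At its core, a point distribution model of this kind reduces to a PCA over concatenated landmark coordinates. The sketch below, with hypothetical array shapes, shows how a combined bi-temporal, bi-ventricular shape vector yields modes that couple both phases and both ventricles; the ICA stage for local variation is omitted.

```python
import numpy as np

def build_pca_shape_model(shapes):
    """PCA point distribution model from training shapes.

    shapes: (n_samples, n_coords) array; each row concatenates the
    x/y/z coordinates of both ventricles at both heart phases, so the
    modes capture inter-phase and inter-ventricle correlation.
    """
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    # Economy-size SVD gives the eigenmodes of the sample covariance.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigenvalues = (s ** 2) / (shapes.shape[0] - 1)
    return mean_shape, vt, eigenvalues      # rows of vt are the modes

def synthesize(mean_shape, modes, coeffs):
    """Reconstruct a shape from the first len(coeffs) modes."""
    return mean_shape + coeffs @ modes[:len(coeffs)]
```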
With the introduction of new multislice computed tomography (MSCT) scanners, it has become possible to produce high-speed CT angiography (CTA), which has become a preferred method for imaging emergent vascular conditions. Magnetic resonance imaging of blood vessels, in turn, is referred to as magnetic resonance angiography (MRA). Both forms of angiography offer high-quality three-dimensional information about the vessels. In this study, patient-specific models were reconstructed using multislice CT and magnetic resonance imaging (MRI). The optimal transit time from intravenous injection to enhancement of the cardiovascular system was determined using a contrast bolus tracking technique for the CT examination and phase-contrast magnetic resonance angiography (PC-MRA). The purpose of this study was to describe blood flow in the human cardiovascular system in more detail by constructing actual three-dimensional (3D) flow fields and simulated models using computational fluid dynamics (CFD) methods. CFD streamlines were displayed using a special illumination technique together with a blood pressure display, which gives a much better spatial understanding of the field's structure than ordinary constant-colored lines. Measured velocity vectors from PC-MRA were also displayed for comparison with the CFD simulation. In conclusion, a patient-specific approach combining actual blood flow from PC-MRA with CFD was effective for estimating the flow state of the cardiovascular system.
Subdivision surfaces and parameterization are desirable for many algorithms commonly used in medical image analysis. However, extracting an accurate surface and parameterization can be difficult for many anatomical objects of interest, due to noisy segmentations and the inherent variability of the object. The thin cartilages of the knee are an example of this, especially after damage is incurred from injuries or conditions like osteoarthritis. As a result, the cartilages can have different topologies or exist in multiple pieces. In this paper we present a topology-preserving (genus 0) subdivision-based parametric deformable model that is used to extract the surfaces of the patella and tibial cartilages in the knee. These surfaces have minimal thickness in areas without cartilage. The algorithm inherently incorporates several desirable properties, including shape-based interpolation, subdivision remeshing, and parameterization. To illustrate the usefulness of this approach, the surfaces and parameterizations of the patella cartilage are used to generate a 3D statistical shape model.
A computational method is presented for optimizing needle placement in radiofrequency ablation treatment planning. The parameterized search is guided by an objective function that depends on transient finite element solutions of coupled thermal and potential equations for each needle placement. A framework is introduced for solving the electrostatic equation by using boundary elements to model the needle as discrete current sources embedded within a finite element mesh. This method permits finite element solutions for multiple needle placements without remeshing. We demonstrate that the method produces a search space amenable to gradient-based optimization techniques.
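Because each finite element solve is treated as a black-box evaluation of the objective, the outer search can be driven by a standard gradient-based optimizer. The sketch below substitutes a hypothetical placeholder objective for the coupled thermal/potential FEM solve, and the parameterization (tip position plus two orientation angles) is assumed for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def objective(params):
    """Hypothetical stand-in: in the actual method this would run the
    transient FEM solve for the needle placement 'params' and score
    the predicted ablation volume against the target region."""
    tip, angles = params[:3], params[3:]
    return np.sum((tip - np.array([10.0, 20.0, 30.0])) ** 2) + np.sum(angles ** 2)

x0 = np.zeros(5)  # e.g. 3 tip coordinates + 2 orientation angles
result = minimize(objective, x0, method="L-BFGS-B")  # gradient-based search
print(result.x)
```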
A significant amount of breast cancer research in recent years has been devoted to novel means of tumor detection, such as MR contrast enhancement, electrical impedance tomography, microwave imaging, and elastography. Many of these detection methods involve deforming the breast, and the deformed images often need to be correlated with anatomical images of the breast in a different configuration. In the case of our elastography framework, a series of comparisons between the pre- and post-deformed images must be performed. This paper presents an automatic method for determining correspondence between images of a pendant breast and a partially constrained, compressed breast. The algorithm extends the symmetric closest point approach of Papademetris et al. However, because of the unique deformation and shape change of a partially constrained, compressed breast, the algorithm was modified through the use of iterative closest point (ICP) registration on easily identifiable sections of the breast images and through weighting of the symmetric nearest-neighbor correspondence. In a simulation, the algorithm presented here significantly improves correspondence determination between the pre- and post-deformed images compared to the original symmetric closest point criterion of Papademetris et al.
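The symmetric flavor of the correspondence can be sketched as follows: pairings are retained only when the forward (source-to-target) and reverse (target-to-source) nearest-neighbor searches agree. This is a deliberately simplified stand-in for the weighted criterion used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def symmetric_closest_points(source, target):
    """Keep only mutually consistent nearest-neighbor pairs between
    two point sets (a simplification of the weighted symmetric
    closest-point criterion)."""
    fwd = cKDTree(target).query(source)[1]   # source i -> target fwd[i]
    rev = cKDTree(source).query(target)[1]   # target j -> source rev[j]
    pairs = [(i, fwd[i]) for i in range(len(source)) if rev[fwd[i]] == i]
    return np.array(pairs)                   # (k, 2) index pairs
```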
In this paper, we develop a digital atlas of the pediatric human brain. Human brain atlases, used to visualize spatially complex structures of the brain, are indispensable tools in model-based segmentation and quantitative analysis of brain structures. However, adult brain atlases do not adequately represent the normal maturational patterns of the pediatric brain, and the use of an adult model in pediatric studies may introduce substantial bias. The atlas was constructed from a T1-weighted MR dataset of a 9-year-old, right-handed girl. Furthermore, we extracted and simplified the boundary surfaces of 25 manually defined brain structures (cortical and subcortical) based on surface curvature: higher-curvature surfaces were simplified with more reference points, lower-curvature surfaces with fewer. We constructed a 3D triangular mesh model for each structure by triangulation of the structure's reference points. Kappa statistics (cortical, 0.97; subcortical, 0.91) indicated substantial similarity between the mesh-defined and the original volumes. Our brain atlas and structural mesh models (www.stjude.org/BrainAtlas) can be used to plan treatment, to conduct knowledge- and model-driven segmentation, and to analyze the shapes of brain structures in pediatric patients.
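The kappa statistic quoted for mesh-versus-manual agreement can be computed directly from two binary volumes; a minimal sketch, assuming NumPy boolean arrays of equal shape:

```python
import numpy as np

def kappa(vol_a, vol_b):
    """Cohen's kappa between two binary label volumes, as used to
    compare mesh-derived and manually defined structure volumes."""
    a = vol_a.astype(bool).ravel()
    b = vol_b.astype(bool).ravel()
    observed = np.mean(a == b)                       # observed agreement
    p_a, p_b = a.mean(), b.mean()
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)     # chance agreement
    return (observed - expected) / (1 - expected)
```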
A method for detection, quantification, and visualization of brain shift in serial MR and CT images is presented. The method consists of three steps. It first establishes correspondence between a number of point landmarks in the images. It then uses the correspondences to determine a transformation function that warps one image to the geometry of the other. It finally uses the obtained transformation to create a vector flow that represents the local motion or deformation of one image with respect to the other. The method does not require the solution of a system of equations and, therefore, is especially effective when a large number of correspondences is needed to represent complex brain deformations.
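One family of transformation functions that, like the method described, requires no linear system solve is inverse-distance (Shepard) interpolation of the landmark displacements. The sketch below uses it purely as an illustrative stand-in, since the abstract does not name the exact transformation function.

```python
import numpy as np

def displacement_field(landmarks_a, landmarks_b, grid_points, p=2.0):
    """Dense displacement vectors by inverse-distance weighting of the
    landmark displacements (Shepard interpolation): no equation solve
    is needed, only weighted averaging.

    landmarks_a, landmarks_b: (n, 3) matched landmark positions.
    grid_points: (m, 3) locations where the vector flow is evaluated.
    """
    disp = landmarks_b - landmarks_a                     # (n, 3)
    # distances from every grid point to every landmark, (m, n)
    d = np.linalg.norm(grid_points[:, None, :] - landmarks_a[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** p                   # IDW weights
    w /= w.sum(axis=1, keepdims=True)
    return w @ disp                                      # (m, 3) vectors
```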
Localization and labeling of functional regions in the brain is an important topic in experimental brain science, because the huge amount of data collected by neuroscientists becomes meaningless without a precise description of the locations involved. In this paper, we propose a localization and labeling method for 3D MR images of the rat brain based on the Paxinos-Watson atlas. Our objective is to use this atlas to localize and label specified tissues of interest (TOIs), mimicking a veteran expert so that invisible or unclear anatomical functional regions in rat-brain MR images can be automatically identified and marked. We propose a multi-step method to locate and label TOIs in the MR image of the rat brain. First, pre-processing: digitization and 3D reconstruction of the atlas and the rat-brain MRI. Second, two-step registration: a global registration eliminates gross misalignment, section-angle offset, and scale differences between the MRI and the atlas; some unambiguous, characteristic points are chosen manually, and from these correspondences a coarse registration is obtained using an affine model. A local registration then addresses the individual variability of the rat brain using a snake model. Third, post-processing: the TOIs in the selected rat-brain MR slice are located and labeled, guided by the well-registered atlas. Experiments demonstrated the feasibility of our method.
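The coarse global registration from manually chosen corresponding points amounts to a linear least-squares fit of an affine model, e.g. (a sketch with assumed array shapes):

```python
import numpy as np

def fit_affine_3d(src_pts, dst_pts):
    """Least-squares 3D affine transform from manually picked
    correspondences: solves dst ≈ A @ src + t for a 3x3 matrix A and
    a translation t, absorbing rotation, scale, and shear."""
    n = src_pts.shape[0]
    hom = np.hstack([src_pts, np.ones((n, 1))])             # (n, 4)
    params, *_ = np.linalg.lstsq(hom, dst_pts, rcond=None)  # (4, 3)
    A = params[:3].T                                        # linear part
    t = params[3]                                           # translation
    return A, t

# apply to new points: transformed = pts @ A.T + t
```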
Localization of epileptogenic zones in extratemporal epilepsy is a challenging problem. We speculate that using all modalities of data in an optimal way can facilitate the localization of these zones. In this paper, we propose the following steps to bring all modalities of data into a single reference coordinate system: 1) segmentation of the subdural and depth electrodes and of the cortical surface; 2) building 3D models of the segmented objects; 3) registration of preoperative MRI, postoperative CT, and magnetoencephalography (MEG). These steps result in fusion of all data modalities, the objects of interest (electrodes and cortical surface), MEG analysis results, and brain mapping findings. This approach offers a means by which an accurate appreciation of the zone of epileptogenicity may be established through optimal visualization and further quantitative analysis of the fused data. It also provides a basis for validating less expensive and noninvasive procedures, e.g., scalp EEG and MEG.
We propose a robust surface registration method using a Gaussian-weighted distance map (GWDM) for PET-CT brain fusion. Our method is composed of four steps. First, we segment the background of the PET and CT brain images using 3D seeded region growing and apply an inverse operation to the segmented images to obtain the head without holes. Non-head regions segmented along with the head are then removed using region-growing-based labeling, and a sharpening filter is applied to the segmented head in order to extract feature points of the head from the PET and CT images, respectively. Second, a GWDM is generated from the feature points of the CT images to guide the feature points extracted from the blurry, noisy PET images to robustly align to the CT images at the optimal location. Third, the similarity measure is evaluated iteratively using weighted cross-correlation (WCC). In our experiments, we evaluate our method on a software phantom and clinical datasets in terms of visual inspection, accuracy, robustness, and computation time. For the software phantom dataset, the RMSE of our method is less than 0.1 mm for translations and 0.2° for rotations, better than that of conventional methods. In addition, our method registers robustly to the optimal location regardless of increasing noise levels.
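A Gaussian-weighted distance map of this kind can be sketched as a Euclidean distance transform of the CT feature points passed through a Gaussian, so that the map peaks on the features and decays smoothly, giving the noisy PET features a wide basin of attraction. The width parameter below is an assumption, not a value from the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def gaussian_weighted_distance_map(ct_feature_mask, sigma=5.0):
    """GWDM sketch: distance to the nearest CT feature voxel, mapped
    through a Gaussian so the result is 1.0 on the features and falls
    off smoothly with distance. sigma is in voxels (assumed knob)."""
    # distance_transform_edt measures distance to the nearest zero,
    # so invert the mask: feature voxels become zeros.
    dist = distance_transform_edt(~ct_feature_mask.astype(bool))
    return np.exp(-(dist ** 2) / (2.0 * sigma ** 2))
```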
Neurosurgical navigation systems that use preoperative images suffer from inaccuracy caused by brain deformation during surgery. To address this problem, the use of a laser range scanner to acquire the intraoperative cortical surface is under study for the neurosurgical navigation system we are currently developing. This paper presents preliminary results on registration of intraoperatively acquired range and color images to preoperative MR images within the context of image-guided surgery. We register the images in two procedures: mapping of the color image onto the range image, and registration between the color-mapped range images and the preoperative medical images. The color image is mapped onto the range image using camera calibration. Point-based rigid registration of preoperative images to the intraoperative images is performed through detection and matching of common fiducials in the images. Experimental results using intraoperatively acquired range images of the cortical surface demonstrated the ability to perform registrations to MR images of the brain. In the future, we will focus on incorporating these registration results into a biomechanical model of the brain to predict brain deformation during surgical procedures.
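Point-based rigid registration from matched fiducials has a well-known closed-form solution via the SVD (the Kabsch/Horn approach); the abstract does not state which solver the authors used, but the following sketch illustrates the step.

```python
import numpy as np

def rigid_register(fixed_pts, moving_pts):
    """Closed-form least-squares rigid registration from matched
    fiducials (SVD / Kabsch method), e.g. range-image fiducials paired
    with the same fiducials located in the MR volume."""
    fc, mc = fixed_pts.mean(axis=0), moving_pts.mean(axis=0)
    H = (moving_pts - mc).T @ (fixed_pts - fc)       # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    # reflection guard keeps R a proper rotation (det = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = fc - R @ mc
    return R, t                                      # fixed ≈ R @ moving + t
```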
This paper presents a method for segmenting brain tissues from MR images, developed for our image-guided neurosurgery system currently under development. Our goal is to segment brain tissues for creating a biomechanical model. The proposed segmentation method is based on 3-D region growing and outperforms conventional approaches through stepwise use of intensity similarities between voxels in conjunction with edge information. Since intensity and edge information are complementary in region-based segmentation, we use them twice by performing a coarse-to-fine extraction. First, the edge information in an appropriate neighborhood of the voxel under consideration is examined to constrain the region growing. The expanded region from this first extraction is then used as the domain for the next stage, in which only the intensity and edge information of the current voxel are utilized for the final extraction. Before segmentation, the intensity parameters of the brain tissues as well as the partial volume effect are estimated using the expectation-maximization (EM) algorithm in order to provide an accurate data interpretation for the extraction. We tested the proposed method on T1-weighted MR images of the brain and evaluated its effectiveness by comparing the results with ground truth. Meshes generated from the segmented brain volume using mesh-generation software are also shown.
During an image-guided neurosurgery procedure, the neuronavigation system is subject to inaccuracy because anatomical deformations open a gap between the preoperative images and the intraoperative anatomical reality. The objective of many research teams is therefore to quantify these deformations in order to update the preoperative images. Intraoperative anatomical deformation is a complex spatio-temporal phenomenon. Our objective is to identify the parameters implicated in these deformations and to use them as constraints for systems dedicated to updating preoperative images. To identify these deformation parameters, we followed the iterative methodology used for cognitive system design: identification, conceptualization, formalization, implementation, and validation. A review of the state of the art on cortical deformations was conducted to identify relevant parameters likely involved in the deformations. As a first step, 30 parameters were identified and described following an ontological approach, then formalized in a Unified Modeling Language (UML) class diagram. We implemented this model in a web-based application in order to populate a database. Two surgical cases have been studied so far. After enough surgical cases have been entered for data mining purposes, we expect to identify the most relevant and influential parameters and to better understand the deformation phenomenon. This original approach is part of a global system aimed at quantifying and correcting anatomical deformations.
It is fundamentally important that all cancerous cells be adequately destroyed during radiofrequency ablation (RFA) procedures. To help achieve this goal, probe manufacturers advise physicians to enlarge the treatment region by one centimeter (1 cm) in all directions around the diseased tissue. This enlarged treatment region provides a buffer to ensure that cancer cells that have migrated into surrounding tissue are adequately treated and necrose. Even though RFA is a minimally invasive, image-guided procedure, it is difficult for physicians to confidently follow the specified treatment protocol. In this paper we visually assess an RFA treatment by comparing a registered image set containing the untreated tumor, including the 1 cm safety boundary, to an image set containing the treated region acquired one month after surgery. We used computed tomography images, in which both the tumor and the treated region are visible. To align the image sets of the abdomen, we investigate three different registration techniques: an affine transform that minimizes the correlation ratio, a point- (or landmark-) based 3D thin-plate spline approach, and a nonlinear B-spline elastic registration method. We found the affine registration technique simple and easy to use because it is fully automatic; unfortunately, it resulted in the largest visible discrepancy in the liver between the fused images. The thin-plate spline technique required the physician to identify corresponding landmarks in both image sets, but produced better visual accuracy in the fused images. Finally, the nonlinear B-spline elastic registration technique used the thin-plate spline result as a starting point and required a significant amount of computation to determine its transformation, but provided the most visually accurate fused image set.
We propose a fast 2D-3D marker-based registration technique to fuse anatomical structures from 3D CT scans onto 2D X-ray fluoroscopy images. Our method is composed of three stages. First, DRRs (digitally reconstructed radiographs) are generated by maximum intensity projection using hardware texture-based volume rendering; this technique is over 200 times faster than a software-based one. Second, confirmation markers are automatically segmented in the DRRs and X-ray fluoroscopy images, respectively. Third, an in-plane/out-of-plane registration is proposed for real-time performance: in the out-of-plane registration, we search for an optimal position of the X-ray source in a 3D spherical coordinate system; in the in-plane registration, we then calculate optimal translation and rotation vectors using the principal axes method. Our method has been successfully applied to six different CT and X-ray fluoroscopy pairs generated from cardiac phantom datasets. For accuracy evaluation, we calculate the root-mean-squared error (RMSE) between the confirmation markers of the DRRs and the X-ray fluoroscopy images. Experimental results show that our DRR generation method is very fast and that the hierarchical registration effectively finds the match between the DRRs and the 2D images.
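Two of the ingredients are easy to sketch in isolation: a maximum intensity projection (here axis-aligned on the CPU, whereas the paper renders along arbitrary ray directions on the GPU) and the marker RMSE used for accuracy evaluation.

```python
import numpy as np

def mip_drr(ct_volume, axis=2):
    """Toy DRR by maximum intensity projection along one volume axis;
    illustrates the principle, not the GPU implementation."""
    return ct_volume.max(axis=axis)

def marker_rmse(drr_markers, fluoro_markers):
    """Root-mean-squared distance between matched 2D marker sets,
    the accuracy measure quoted in the abstract."""
    diff = np.asarray(drr_markers) - np.asarray(fluoro_markers)
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))
```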
The design of C-arm equipment with 3D imaging capabilities involves the retrieval of repeatable gantry positioning information along the acquisition trajectory. Inaccurate retrieval or improper use of positioning information may degrade the reconstruction results, introduce image artifacts, or indicate false structures. Geometry misrepresentation can also lead to errors in the relative pose assessment of the anatomy of interest and interventional tools. Comprehensive C-arm gantry calibration with an extended set of misalignment and motion parameters suffers from ambiguity caused by parameter cross-correlation, as well as from significant computational complexity. We deploy the concept of a waterfall calibration comprising sequential intrinsic and extrinsic geometry delineation steps. Following an image-based framework, the first step of our method is intrinsic calibration, which delineates the geometry of the X-ray tube-detector assembly. The extrinsic parameters define the motion of the C-arm assembly in 3D space and relate the camera and world coordinate systems. We formulate both the intrinsic and extrinsic calibration problems in vectorized form with total-variation constraints. The proposed method has been verified by numerical simulation and validated by experimental studies. Sequential delineation of the intrinsic and extrinsic geometries demonstrated very efficient performance. The method eliminates the cross-correlation between cone-beam projection parameters, provides significantly better accuracy and computational speed, simplifies the structure of the calibration targets used, and avoids unnecessary workflow and image processing steps. It appears adequate for quality and cost derivations in interventional surgery settings using a mobile C-arm.
To detect lung cancer at an earlier stage, a promising method is to apply perfusion magnetic resonance imaging (pMRI) modified to assess tumor angiogenesis. One key issue is to effectively characterize the angiogenic patterns of pulmonary nodules. Building on our previous study addressing this issue, in this work we develop STAT, a Spatio-Temporal Analysis Tool that implements not only our previously proposed pulmonary nodule modeling framework but also a user-friendly interface and many extended functions. Our goal is to make STAT an easy-to-use tool that can be applied to more general cases. STAT employs the following overall strategy for modeling pulmonary nodules: (1) nodule identification using a correlation maximization method; (2) nodule segmentation using edge detection, morphological operations, and a model-based strategy; and (3) nodule registration using a landmark approach with thin-plate spline interpolation. In nodule identification, STAT provides new schemes for selecting the template and refining results in difficult cases. In nodule segmentation, STAT provides additional flexibility for creating the weighting mask, selecting morphological structuring elements, and individually correcting segmentation results. In nodule registration, our previous study used principal component analysis for landmark extraction, which may not work in general; to overcome this limitation, STAT provides an enhanced approach that minimizes the bending energy of the thin-plate spline interpolation or the mean square distance between each landmark set and the template set. Our main application of STAT is to define blood arrival patterns in the lung to identify tumor angiogenesis as a means of early, accurate diagnosis of cancer.
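The correlation-maximization step for nodule identification can be illustrated with a direct normalized cross-correlation search. A brute-force 2D sketch is shown below; a practical implementation would use FFT-based correlation, and the details here are not the authors' code.

```python
import numpy as np

def best_match(image, template):
    """Slide a nodule template over the image and return the offset
    with the highest normalized cross-correlation (NCC in [-1, 1])."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    best, best_score = (0, 0), -np.inf
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            patch = image[i:i + th, j:j + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            score = np.mean(p * t)          # normalized correlation
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score
```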
Dynamic or 4D images (in which a section of the body is repeatedly imaged in order to capture physiological motion) are becoming increasingly important in medicine. These images are especially critical to the field of image-guided therapy, because they enable treatment planning that reflects the realistic motion of the therapy target. Although it is possible to acquire static images and deform them based on generalized assumptions of normal motion, such an approach does not account for variability in the individual patient. To enable the most effective treatments, it is necessary to be able to image each patient and characterize their unique respiratory motion, but software specifically designed around the needs of 4D imaging is not widely available. We have constructed an open source application that allows a user to manipulate and analyze 4D image data. This interface can load DICOM images into memory, reorder/rebin them if necessary, and then apply deformable registration methods to derive the respiratory motion. The interface allows output and display of the deformation field, display of images with the deformation field as an overlay, and tables and graphs of motion versus time. The registration is based on the open source Insight Toolkit (ITK) and the interface is constructed using the open source GUI tool FLTK, which will make it easy to distribute and extend this software in the future.
We propose a novel and fast way to perform 2D-3D registration between available intra-operative 2D images and pre-operative 3D images in order to provide better image guidance. The present work is a feature-based registration algorithm that allows the similarity to be evaluated much more efficiently than in intensity-based approaches. The approach is focused on neuro-interventional applications, and we therefore use blood vessels, specifically their centerlines, as the features for registration. The blood vessels are segmented from the 3D datasets and their centerlines are extracted using a sequential topological thinning algorithm. Segmentation of the 3D datasets is straightforward because of the injected contrast agent. For the 2D image, segmentation of the blood vessels is performed by subtracting the image with no contrast (native) from the one with contrast injected (fill). Following this, we compute a modified version of the 2D distance transform, defined so that the distance is zero on the centerline and increases away from it. This yields a smooth metric that is minimal at the centerline and grows as we move away from the vessel. It is a one-time computation and need not be re-evaluated during the iterations. Also, we simply sum over all points rather than evaluating distances over all point pairs, as would be done in comparable Iterative Closest Point (ICP) based approaches. We estimate the three rotational and three translational parameters by minimizing this cost over all points of the 3D centerline. The resulting speed allows us to perform the registration in under a second on current workstations, providing interactive registration for the interventionalist.
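The key efficiency trick, evaluating a precomputed distance map at the projected centerline points instead of computing pairwise ICP distances, is captured by this sketch; the projection of the 3D centerline under the current six-parameter pose is assumed to be computed elsewhere.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def centerline_cost(vessel_mask_2d, projected_pts):
    """Cost for one candidate pose: sum of distance-map values at the
    projected 3D centerline points. The map is zero on the 2D vessel
    centerline, grows away from it, and is computed only once.

    vessel_mask_2d: boolean image, True on the 2D centerline.
    projected_pts: (n, 2) array of (x, y) projections of 3D points.
    """
    dmap = distance_transform_edt(~vessel_mask_2d.astype(bool))
    rows = np.clip(projected_pts[:, 1].astype(int), 0, dmap.shape[0] - 1)
    cols = np.clip(projected_pts[:, 0].astype(int), 0, dmap.shape[1] - 1)
    return dmap[rows, cols].sum()   # minimize over the 6 pose parameters
```

In practice the distance map would be cached and only the projection re-evaluated each iteration, which is what makes sub-second registration plausible.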
This paper presents a novel ultrasound-guided computer system for arthroscopic surgery of the shoulder joint. Intraoperatively, the system tracks and displays the surgical instruments, such as the arthroscope and arthroscopic burrs, relative to the anatomy of the patient. The purpose of the system is to improve the surgeon's perception of the three-dimensional space within the patient's anatomy in which the instruments are manipulated and to provide guidance toward the targeted anatomy. Pre-operatively, computed tomography images of the patient are acquired to construct virtual three-dimensional surface models of the shoulder bone structure. Intra-operatively, live ultrasound images of pre-selected regions of the shoulder are captured using an ultrasound probe whose three-dimensional position is tracked by an optical camera. These images are used to register the surface model to the anatomy of the patient in the operating room. An initial alignment is obtained by matching at least three points manually selected on the model to their corresponding points identified on the ultrasound images. The registration is then refined with an iterative closest point or a sequential least-squares estimation technique; in the present study, the registration results of these techniques are compared. After registration, the surgical instruments are displayed relative to the surface model of the patient on a graphical screen visible to the surgeon. Results of laboratory experiments on a shoulder phantom indicate acceptable registration accuracy and sufficiently fast overall system performance for use in the operating room.
Freehand 3D ultrasound allows intra-operative imaging of volumes of interest in a fast and flexible way. However, the ultrasound device must be calibrated before it can be registered with other imaging modalities. We present a needle-fiducial-based electromagnetic localization approach for calibrating freehand 3D ultrasound as a prerequisite for creating an intra-operative navigation system. While most existing calibration methods require a complex and tedious experiment using a customized calibration phantom, our method does not: the calibration set-up requires only a container of water and several frames (three to nine) in which an electromagnetically tracked needle tip is detected in the 2D ultrasound image. The tracked needle is dipped into the water and moved freehand to place the tip in the ultrasound imaging plane. Images showing the needle tip are recorded, and its coordinates are identified manually or automatically. For each frame, the pixel indices, together with the discrete coordinates of the tracker and the needle, are used as inputs, and the calibration matrix is reconstructed. Three groups of positions, each with nine frames, were recorded for calibration and validation. Despite the lower accuracy of the electromagnetic tracking device compared to optical trackers, the maximum RMS calibration error is 1.22 mm with six or more frames, which shows that our proposed approach is accurate and feasible.
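Reconstructing the calibration matrix from needle-tip observations reduces to a linear least-squares problem once the pixel scale factors are folded into the unknown matrix. The sketch below assumes the tracked tip positions have already been re-expressed in the probe sensor frame; the exact parameterization used in the paper may differ.

```python
import numpy as np

def calibrate_us(pixels, tips_in_sensor):
    """Linear least-squares sketch of freehand US calibration: for
    each frame, the tracked needle tip (in the probe sensor frame)
    must coincide with its pixel location mapped through the unknown
    calibration. With scale factors folded in, tip ≈ M @ [u, v, 1]^T.

    pixels: (n, 2) pixel indices of the tip in each frame.
    tips_in_sensor: (n, 3) tip positions in the sensor frame.
    """
    uv1 = np.hstack([pixels, np.ones((pixels.shape[0], 1))])   # (n, 3)
    X, *_ = np.linalg.lstsq(uv1, tips_in_sensor, rcond=None)   # (3, 3)
    return X.T   # M such that tip ≈ M @ [u, v, 1]
```

Three well-spread, non-collinear frames are the minimum for this linear form, consistent with the three-to-nine frames reported in the abstract.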
This paper describes a new robust method for 2D and 3D ultrasound (US) probe calibration using a closed-form solution. Prior to calibration, a position sensor is attached to the probe and is used to tag each image/volume with its position and orientation in space. At the same time, image information is used to determine the target location in probe coordinates. The calibration procedure uses these two pieces of information to determine the transformation (translation, rotation, and scaling) of the scan plane with respect to the position sensor. We introduce a novel methodology for real-time in-vivo quality control of tracked US systems in order to capture registration failures during the clinical procedure. In effect, we dynamically recalibrate the tracked US system for rotation, scale factor, and in-plane position offset up to a scale factor. We detect any unexpected change in these parameters by capturing discrepancies in the resulting calibration matrix, thereby assuring the quality (accuracy and consistency) of the tracked system. No phantom is used for the recalibration, and the quality control runs in the background, transparently to the clinical user, while the subject is being scanned. We present the concept, mathematical formulation, and in-vitro experimental evaluation. This new method can play an important role in guaranteeing accurate, consistent, and reliable performance of tracked ultrasound.
Visualization and image processing of medical datasets has become an essential task for clinical diagnosis support as well as for treatment planning. In order to enable a physician to use and evaluate algorithms within a clinical setting, easily applicable software prototypes with a dedicated user interface are essential. However, substantial programming knowledge is still required today when using powerful open source libraries such as the Visualization Toolkit (VTK) or the Insight Toolkit (ITK); moreover, these toolkits provide only limited graphical user interface functionality. In this paper, we present the visual programming and rapid prototyping platform MeVisLab, which provides flexible and simple handling of visualization and image processing algorithms from VTK/ITK, Open Inventor, and the MeVis Image Library via modular visual programming. No programming knowledge is required to set up image processing and visualization pipelines, and complete applications including user interfaces can easily be built within a general framework. In addition to the VTK/ITK features, MeVisLab provides full integration of the Open Inventor library and offers a state-of-the-art integrated volume renderer. The integration of VTK/ITK algorithms is performed automatically: an XML structure is created from the toolkits' source code, followed by automatic module generation from this XML description. Thus, MeVisLab offers a one-stop solution integrating VTK/ITK as modules and is suited for rapid prototyping as well as for teaching medical visualization and image analysis. The VTK/ITK integration is available as a package of the free version of MeVisLab.
Dual-modality imaging scanners combining functional PET and anatomical CT pose a challenge for volumetric visualization that can be limited by high computational demand and expense. This study aims at providing physicians with multi-dimensional visualization tools for navigating and manipulating the data on a consumer PC. We have maximized the utilization of the pixel-shader architecture of low-cost graphics hardware and texture-based volume rendering to provide visualization tools with a high degree of interactivity. All software was developed using OpenGL and Silicon Graphics Inc. Volumizer and tested on a PC notebook with a Pentium mobile CPU and 64 MB of graphics memory. We render the individual modalities separately and perform real-time per-voxel fusion. We designed a novel "alpha-spike" transfer function to interactively identify structures of interest in the volume rendering of PET/CT. It works by assigning a non-linear opacity to the voxels, allowing the physician to selectively eliminate or reveal information from the PET/CT volumes. As the PET and CT are rendered independently, manipulations can be applied to individual volumes: for instance, a transfer function can be applied to the CT to reveal the lung boundary while the fusion ratio between CT and PET is adjusted to enhance the contrast of a tumour region, with the manipulated datasets fused together in real time as the adjustments are made. In addition to conventional navigation and manipulation tools such as scaling, LUTs, and volume slicing, our strategy permits efficient visualization of PET/CT volume renderings, which can potentially aid interpretation and diagnosis.
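The "alpha-spike" idea, assigning non-linear opacity so that only a narrow intensity band is revealed, can be illustrated with a small opacity lookup table. The Gaussian spike shape and parameter names below are assumptions for illustration, not the authors' exact definition.

```python
import numpy as np

def alpha_spike_lut(center, width, peak_alpha, n=256):
    """Illustrative 'alpha-spike' opacity LUT: opacity is near zero
    everywhere except a narrow spike around the intensity of the
    structure of interest, so voxels at that intensity are revealed
    while the rest are suppressed."""
    x = np.arange(n)
    alpha = peak_alpha * np.exp(-((x - center) ** 2) / (2.0 * width ** 2))
    return alpha.astype(np.float32)   # upload as the volume's opacity LUT

# e.g. highlight a hypothetical hot PET uptake band:
lut = alpha_spike_lut(center=200, width=8, peak_alpha=0.9)
```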
Volume rendering has high utility in the visualization of segmented datasets. However, volume rendering of segmented labels along with the original data causes undesirable intermixing/bleeding artifacts arising from interpolation at the sharp label boundaries. This issue is amplified in 3D texture-based volume rendering due to the inaccessibility of the interpolation stage. We present an approach that minimizes intermixing artifacts while maintaining the high performance of 3D texture-based volume rendering, both of which are critical for intra-operative visualization. Our approach uses a 2D transfer-function-based classification scheme in which label distinction is achieved through an encoding that generates unique gradient values for the labels. This ensures that labelled voxels always map to distinct regions of the 2D transfer function, irrespective of interpolation. In contrast to previously reported algorithms, our algorithm does not require multiple rendering passes and supports more than four masks. It also allows real-time modification of the colors/opacities of the segmented structures along with the original data. Additionally, these capabilities are achieved with minimal texture memory requirements among comparable algorithms. Results are presented on clinical and phantom data.
Three-dimensional texture-based volume rendering is a technique that treats a 3D volume as a 3D texture, renders multiple 2D view-oriented slices, and blends them into the frame buffer. The technique is well developed in computer graphics and medical visualization and widely accepted thanks to advances in computer hardware. This research aims at developing fast parallel slice-cutting and partial-exposing algorithms for real-time 3D-texture-based volume rendering in image-guided surgery and therapy planning. In texture-based volume rendering, a large number of slices is needed to render the volume at high image quality, but for real-time interactive volume rendering the computation time is critical. Instead of repeating the cutting computation for each slice against the volume data, as conventional cutting algorithms do, the slice-cutting algorithm developed in this paper applies the cutting only to the initial slice and derives the slice vertices and 3D texture coordinates for all the others from the distance between the current slice and the initial slice. The new algorithm dramatically reduces the computation time for slice cutting and eases the generation of sectional views of a volume. Partial exposing is another useful technique in volume visualization used to reveal important but hidden information. Two depth-based partial-exposing algorithms are developed and implemented in this paper. Both partial-exposing techniques work with arbitrarily complex, but convex, cutaway object shapes, and their implementations maintain an interactive frame rate for 3D texture-based volume rendering without apparent performance decline compared to non-cutaway rendering.
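The core of the fast slice-cutting idea is that, once the initial view-aligned polygon is known, subsequent slices follow by translating it along the viewing direction by the slice spacing. The sketch below shows only that translation step and ignores the re-clipping of slice polygons against the volume boundary that a full implementation must handle.

```python
import numpy as np

def offset_slices(initial_vertices, view_dir, spacing, n_slices):
    """Derive all slice polygons from the initial one by offsetting
    along the (unit) viewing direction; texture coordinates would
    follow by the same offset scaled into texture space (omitted).

    initial_vertices: (v, 3) vertices of the first view-aligned slice.
    """
    view_dir = view_dir / np.linalg.norm(view_dir)
    return [initial_vertices + k * spacing * view_dir for k in range(n_slices)]
```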
We describe a visualization tool for the reporting of organ tumors such as lung nodules. It provides a 3D visual summary of all detected and segmented tumors and allows the user to navigate through the display. The detected and segmented nodules are displayed using surface rendering to show their shapes and relative sizes, with anatomic features used as references. In this implementation, the two lung surfaces are rendered semi-transparent as the visual reference; however, other references could be used, such as the thoracic cage, airways, or vessel trees. The display is fully 3D, meaning that the user can rotate the objects as a whole, view the display from different angles, and zoom at will to see an enlarged view of a nodule. The 3D display is spatially synchronized with the main window that displays the volume data: a click on a nodule in the 3D display updates the main display to the slice where the nodule is located, and the nodule location is outlined in the slice shown in the main window. This is a general reporting tool that can be applied to all oncology applications in all modalities whenever the segmentation and detection of tumors are essential. It can also be extended into a visualization tool for combinatorial reporting of all relevant pathologies.
Direct volumetric visualization of medical datasets has important applications in areas such as minimally invasive therapies and surgical simulation. In popular fixed-slice-distance hardware-based volume rendering algorithms, such as 2D and 3D texture mapping, the non-isotropic nature of volumetric medical images and the constantly changing viewing rays make it difficult to render medical datasets without disturbing slicing artifacts during volume rotation. We have developed a hardware-accelerated 3D medical image visualization system based on a commodity graphics unit, in which a viewing-direction-based dynamic texture-slice resampling scheme is described and implemented on an Nvidia graphics processing unit (GPU). In our algorithm, the graphics hardware dynamically slices the volume texture according to the viewing direction during rendering, so the slice count can be changed dynamically without consuming additional video memory. Near-uniform effective slice spacing is achieved in real time and updated as the viewing angle changes, yielding more uniform visual quality at high rendering performance. To further improve rendering efficiency, we have implemented a multi-resolution scheme within our rendering system that offers the user the option to highlight the volume of interest (VOI) and render it at higher resolution than the surrounding structures. The system also incorporates a fragment-level interactive post-classification algorithm that modifies the texture directly within the texture unit on the graphics card, making it possible to interactively change transfer function parameters and navigate medical datasets in real time during 3D medical image visualization.
Direct volume rendering on consumer PC hardware has become an efficient tool for volume visualization. In particular, volumetric ray casting, the most frequently used volume rendering technique, can be implemented with the shading language of graphics processing units (GPUs). However, producing high-quality images with GPU-based volume rendering usually requires a high sampling rate. In this paper, we present an algorithm that generates high-quality images from a small number of slices by utilizing a displaced pixel shading technique. Instead of sampling points along a ray at regular intervals, the actual surface location is calculated by linear interpolation between the outer and inner points, and this location is used as the displaced pixel for iso-surface illumination. Multi-pass rendering and early Z-culling are applied to improve rendering speed: the first pass locates and stores the exact surface depth for each ray using a few pixel instructions, and the second pass shades the surface at the stored position. A new 3D edge detector from our previous research is integrated to provide more realistic rendering results than the widely used gradient normal estimator. To implement our algorithm, we created a program named DirectView based on DirectX 9.0c and the Microsoft High Level Shading Language (HLSL). We tested two datasets and found that our algorithm can generate smoother and more accurate shading images with a small number of intermediate slices.
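The displaced-pixel refinement boils down to linearly interpolating the iso-surface crossing between the last sample outside the surface and the first sample inside, then shading at that point rather than at the raw sample. A minimal sketch of that interpolation (in Python rather than HLSL, for readability):

```python
import numpy as np

def refine_crossing(p_out, v_out, p_in, v_in, iso):
    """Given the last sample outside the iso-surface (value v_out at
    position p_out) and the first sample inside (v_in at p_in), return
    the linearly interpolated crossing point used for shading."""
    t = (iso - v_out) / (v_in - v_out)   # fraction of the step, in [0, 1]
    return np.asarray(p_out) + t * (np.asarray(p_in) - np.asarray(p_out))

# e.g. samples straddling iso=100 along a ray:
surface_pt = refine_crossing([0, 0, 1.0], 90.0, [0, 0, 1.5], 120.0, 100.0)
```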