We present a new method for modeling organ deformations due to successive resections. We use a biomechanical model of the organ and compute its volume-displacement solution with the eXtended Finite Element Method (XFEM). The key feature of XFEM is that the material discontinuities induced by each new resection can be handled without remeshing or mesh adaptation, as the conventional Finite Element Method (FEM) would require. We focus on the application of preoperative image updating for image-guided surgery. Proof-of-concept demonstrations are shown for synthetic and real data in the context of neurosurgery.
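The key XFEM idea referenced above can be illustrated in one dimension: a Heaviside enrichment lets a single element represent a displacement jump across a cut, with no need to split the mesh. The following is a minimal sketch, not the paper's implementation; the nodal values, enrichment coefficients, and cut location are invented for illustration.

```python
import numpy as np

# One linear element on [0, 1] with a "cut" at x_c = 0.5.
# Standard part: u_std(x) = N1(x)*u1 + N2(x)*u2  (always continuous).
# XFEM adds a Heaviside enrichment H(x) = sign(x - x_c), in the shifted
# form (H(x) - H(x_i)), so nodal values keep their physical meaning.

def u(x, u1, u2, a1, a2, x_c=0.5):
    N1, N2 = 1.0 - x, x                      # standard linear shape functions
    H = np.where(x >= x_c, 1.0, -1.0)        # Heaviside centered on the cut
    H1, H2 = -1.0, 1.0                       # H evaluated at nodes x=0 and x=1
    return N1 * u1 + N2 * u2 + N1 * (H - H1) * a1 + N2 * (H - H2) * a2

x = np.array([0.4999, 0.5001])               # just left/right of the cut
left, right = u(x, u1=0.0, u2=1.0, a1=0.25, a2=0.25)
print(right - left)                          # finite jump across x_c
```

With the enrichment coefficients set to zero the field reduces to the standard continuous FEM interpolant; nonzero coefficients open a finite jump across the cut without altering the element or its mesh.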
This study investigates the rigid-body registration of pre-operative anatomical high-field and interventional low-field magnetic resonance images (MRI). Accurate 3D registration of these modalities is required to enhance the content of interventional images with anatomical (CT, high-field MRI, DTI), functional (DWI, fMRI, PWI), metabolic (PET) or angiographic (CTA, MRA) pre-operative images. The specific design of the interventional MRI scanner used in the present study, a PoleStar N20, induces image artifacts, such as ellipsoidal masking and intensity inhomogeneities, which degrade registration performance. On MRI data from eleven patients who underwent resection of a brain tumor, we quantitatively evaluated the effect of these artifacts on an image registration process driven by a normalized mutual information (NMI) metric. The results show that the quality of alignment of pre-operative anatomical and interventional images depends strongly on the pre-processing carried out before registration: registrations scored highest in visual evaluation only when intensity variations and masking were accounted for. We conclude that alignment of anatomical high-field MRI and PoleStar interventional images is most accurate when the PoleStar-induced image artifacts are corrected before registration.
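The NMI criterion used in the study can be sketched from its standard definition, NMI(A, B) = (H(A) + H(B)) / H(A, B), estimated from a joint intensity histogram. This is a generic illustration, not the study's implementation; the bin count and test images are arbitrary choices.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint histogram.
    Approaches 1 for independent images, 2 for identical ones."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                # joint intensity distribution
    px = pxy.sum(axis=1)                     # marginal of image a
    py = pxy.sum(axis=0)                     # marginal of image b
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    h_xy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (h_x + h_y) / h_xy

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# A well-aligned pair scores higher than an unrelated pair:
print(normalized_mutual_information(img, img),
      normalized_mutual_information(img, rng.random((64, 64))))
```

A ratio-of-entropies criterion like this is preferred over plain mutual information in multimodal registration because it is less sensitive to the amount of image overlap, which matters here since the PoleStar's ellipsoidal masking changes the overlap region.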
We present a status report on our work in nonrigid registration and multimodality fusion for neurosurgery. The
new features are the ability to perform registration using heterogeneous brain models and to perform fusion
in 3D. We describe the various elements of the system, the experiments performed, the results obtained, and the
lessons learned.
Our goal is to fuse multimodality imagery to enhance image-guided neurosurgery. Images that need to be fused must be registered.
Registration becomes a challenge when the imaged object deforms between the acquisitions of the images to be fused, as happens when 'brain shift' occurs. We begin by describing our strategy for nonrigid registration via finite-element methods. We then independently discuss an image fusion strategy based on a model of the human visual system, and illustrate the operation of many components of the registration system and of the fusion system.