We have recently proposed a new statistical approach based on active contours (snakes) for the segmentation of a single object in an image. In this article, we address the case of an image composed of several different regions. We propose an extension of the snake to a deformable partition of the image with a fixed number of regions. This active grid allows a semi-supervised segmentation of the image.
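For readers unfamiliar with the underlying contour evolution, here is a minimal sketch of a classical greedy snake on a single object, not the authors' statistical formulation or its active-grid extension; the energy weights and neighborhood search are illustrative.

```python
# Minimal greedy snake sketch: each control point moves to the neighboring
# pixel minimizing internal (smoothness) plus external (edge attraction)
# energy. alpha/beta weights are illustrative, not the authors' model.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def greedy_snake_step(pts, image, alpha=0.5, beta=0.5):
    edge = gaussian_gradient_magnitude(image.astype(float), sigma=2.0)
    new_pts = pts.copy()
    n = len(pts)
    for i in range(n):
        prev_pt, next_pt = new_pts[i - 1], new_pts[(i + 1) % n]
        best, best_e = new_pts[i], np.inf
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                cand = new_pts[i] + np.array([dy, dx])
                y, x = cand.astype(int)
                if not (0 <= y < image.shape[0] and 0 <= x < image.shape[1]):
                    continue
                e_int = alpha * np.sum((prev_pt + next_pt - 2 * cand) ** 2)
                e_ext = -beta * edge[y, x]      # attracted to strong edges
                if e_int + e_ext < best_e:
                    best, best_e = cand, e_int + e_ext
        new_pts[i] = best
    return new_pts
```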
Morphological granulometries have been used to successfully discriminate textures in the context of classical feature-based classification. The features are typically the granulometric moments resulting from the pattern spectrum of the random image. This paper takes a different approach and uses the granulometric moments as inputs to a linear system that has been derived by classical optimization techniques for linear filters. The output of the system is a set of estimators that estimate the parameters of the model governing the distribution of the random set. These model parameters are assumed to be random variables possessing a prior distribution, so that the linear filter estimates these random variables based on granulometric moments. The methodology is applied to estimating the primary grain and intensity of a random Boolean model.
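As a rough illustration of the feature side of this pipeline, the sketch below computes a pattern spectrum by openings with growing structuring elements and the resulting granulometric moments. The optimal weight matrix of the linear estimator is the paper's contribution and appears here only as a placeholder comment.

```python
# Sketch under assumptions: pattern spectrum from openings by growing square
# structuring elements, then granulometric moments. The linear estimator's
# weights W, b would come from the paper's optimization (placeholder here).
import numpy as np
from scipy.ndimage import grey_opening

def granulometric_moments(image, max_size=10, n_moments=3):
    areas = [image.sum()]
    for k in range(1, max_size + 1):
        opened = grey_opening(image, size=(2 * k + 1, 2 * k + 1))
        areas.append(opened.sum())
    areas = np.array(areas, dtype=float)
    spectrum = -np.diff(areas) / areas[0]   # normalized pattern spectrum
    sizes = np.arange(1, max_size + 1)
    return np.array([np.sum(spectrum * sizes ** m)
                     for m in range(1, n_moments + 1)])

# Linear filter mapping moments to Boolean-model parameter estimates:
# theta_hat = W @ moments + b   (W, b assumed precomputed offline)
```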
Providing an in-vivo and non-invasive tool for 3D reconstruction of anatomical tree structures (vascular networks and the bronchial tree) from 2D or pseudo-3D data acquisition remains a key and challenging issue for computer vision in medical imaging. In this paper, we address this issue within the specific framework of airways. Our contribution consists of a realistic 3D modeling of the bronchial tree structure. The mathematical and physical principles involved draw on 3D mathematical morphology (3DMM), Diffusion Limited Aggregation (DLA), energy-based modeling and fractal representations. A model-based 3D reconstruction of the bronchial tree is achieved in a fully automated way. The tree segmentation is performed by applying a DLA-based propagation, initialized by the 3DMM procedure. Energy modeling and fractals are used to overcome the well-known cases of subdivision ambiguities and artifact generation related to such a complex topological structure. The proposed method is therefore robust with respect to anatomical variability. The 3D bronchial tree reconstruction is finally visualized using a semi-transparent volume rendering technique which provides bronchogram-like representations. The developed method was applied to a data set acquired within a clinical framework using both double- and multiple-detector CT scanners (5 patients corresponding to 1500 axial slices, including both normal and severely pathological cases). The results, compared with a previously developed 2D/3D technique, show significant improvements in the accuracy of the 3D reconstructions.
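To give a feel for DLA-based propagation, here is a toy sketch in which random walkers aggregate onto a growing region only where a voxel passes an airway-intensity test (airway lumens are dark in CT). The seed, threshold and walk parameters are hypothetical, and the paper's energy and fractal machinery is omitted entirely.

```python
# Toy DLA-style propagation: walkers stick to the aggregate only at voxels
# below an airway threshold. Illustrative only; not the paper's algorithm.
import numpy as np

def dla_propagation(volume, seed, thresh, n_particles=2000, n_steps=4000):
    grown = np.zeros(volume.shape, bool)
    grown[seed] = True
    rng = np.random.default_rng(0)
    moves = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])
    for _ in range(n_particles):
        p = np.array(seed) + rng.integers(-20, 21, size=3)  # launch nearby
        for _ in range(n_steps):
            p = np.clip(p + moves[rng.integers(6)], 0,
                        np.array(volume.shape) - 1)
            z, y, x = p
            nbrs = grown[max(z-1,0):z+2, max(y-1,0):y+2, max(x-1,0):x+2]
            # stick if adjacent to the aggregate and intensity looks airway-like
            if nbrs.any() and volume[z, y, x] < thresh:
                grown[z, y, x] = True
                break
    return grown
```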
In inverse reconstruction, there are often cases where the volume or image to be reconstructed takes only a finite number of possible values. By explicitly modeling such information, discrete tomography aims to achieve better reconstruction quality and accuracy for these cases. This paper attempts to develop a framework for a general discrete tomography problem. The approach starts with an explicit model of the discreteness using a Bayesian formulation. Class label variables are defined to denote the probabilities of each object point belonging to one particular class. The reconstruction then becomes the problem of assigning labels to each object point in the volume (3D) or image (2D) to be reconstructed. Unsurprisingly, this Bayesian labeling process resembles a segmentation process, whose goal is also to estimate a discrete-valued field from continuous-valued observations. An expectation-maximization (EM) algorithm is developed to estimate the class label variables. By introducing another set of variables, the EM algorithm iteratively alternates the estimations of these two sets of variables. A linear equation is finally derived, composed of two terms: one accounts for the effect of the discreteness, and the other represents the integral property of the projection in tomography. This linear equation reveals a very interesting relationship between discrete tomography and ordinary tomography, suggesting that ordinary tomography may be treated as a special case of discrete tomography in which the discreteness term is neglected. Solving the linear equation directly is usually computationally expensive. This paper, however, derives an efficient algorithm by using concepts developed previously in the rho-filtered layergram method for conventional tomography. With the proposed high-pass filter, the solution of the linear equation can be computed very efficiently in the Fourier domain. When the class values are unknown in advance, another level of EM algorithm is invoked to estimate these class values. The paper also discusses a Markov random field (MRF) model for non-stationary a priori probability, encouraging local regularity and smoothness in the reconstruction. The experimental results demonstrate that discrete tomography using the proposed method improves the reconstruction quality greatly, especially when fewer projections are given.
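For context, the conventional rho-filtered layergram ingredient that the paper builds on looks roughly as follows: backproject all projections into a layergram, then apply a 2D ramp (|rho|) filter in the Fourier domain. This is a sketch of the classical step only, not the paper's modified high-pass filter or its discrete EM loop.

```python
# Classical rho-filtering of a backprojection-only reconstruction.
import numpy as np

def rho_filter(layergram):
    """Apply a 2D ramp (|rho|) filter in the Fourier domain to a layergram."""
    ny, nx = layergram.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    rho = np.sqrt(fx ** 2 + fy ** 2)            # radial frequency |rho|
    return np.real(np.fft.ifft2(np.fft.fft2(layergram) * rho))
```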
This article presents a methodology for analyzing the Lagrangian structure of fluid flows generated by the evolution of cloud systems in meteorological multispectral image sequences. The correlation between the orientation of cloud texture and the Lagrangian component of the underlying motion field allows us to adopt a static strategy. Following a scale-space approach, we therefore first construct a non-local robust estimator for the locally dominant orientation field in an image. This estimator, which is derived from the image structure tensor, is relevant in both mono- and multispectral contexts. In a second step, the Lagrangian component of the flow is estimated over some bounded image region by robustly fitting a hierarchical vector parametric model to the dominant orientation field. Here, a recurrent problem is adapting the geometry of the model support to obtain unbiased estimates. To tackle this classic issue, we introduce a novel variational, semi-parametric approach which allows the joint optimization of model parameters and support. This approach is generic and, in particular, can be readily applied to motion estimation, yielding robust measurement of the Eulerian structure of the flow. Finally, a structural characterization of the resulting vector field is derived by means of classic differential geometry techniques. This methodology is applied to the analysis of temperate-latitude depressions in Meteosat images.
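A minimal, local version of a structure-tensor orientation estimator is sketched below for a single-band image; the paper's estimator is non-local and robust, so this only shows the tensor itself. The smoothing scale sigma is illustrative.

```python
# Locally dominant orientation from the smoothed structure tensor.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def dominant_orientation(image, sigma=3.0):
    """Return an angle field (radians, mod pi) of dominant local orientation."""
    img = image.astype(float)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    Jxx = gaussian_filter(gx * gx, sigma)       # tensor components, smoothed
    Jxy = gaussian_filter(gx * gy, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    # principal eigenvector angle of the 2x2 structure tensor
    return 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
```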
This paper addresses the issue of computer vision-based face motion capture as an alternative to physical sensor-based technologies. The proposed method combines a deformable template-based tracking of the mouth and eyes in arbitrary video sequences of a single speaking person with a global 3D head pose estimation procedure yielding robust initializations. Mathematical principles underlying deformable template matching, together with the definition and extraction of salient image features, are presented. Specifically, interpolating cubic B-splines between the MPEG-4 Face Animation Parameters (FAPs) associated with the mouth and eyes are used as the template parameterization. Modeling the template as a network of springs interconnecting the mouth and eye FAPs, the internal energy is expressed as a combination of elastic and symmetry local constraints. The external energy function, which enforces interactions with image data, involves contour, texture and topography properties properly combined within robust potential functions. Template matching is achieved by applying the downhill simplex method to minimize the global energy cost. The stability and accuracy of the results are discussed on a set of 2000 frames corresponding to 5 video sequences of speaking people.
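The optimization step can be pictured as below: a weighted sum of internal and external energies minimized with the downhill simplex method (Nelder-Mead in scipy). The energy callables, parameter layout and weight w are hypothetical stand-ins for the paper's spring and image terms.

```python
# Downhill simplex (Nelder-Mead) minimization of a template energy.
import numpy as np
from scipy.optimize import minimize

# internal(p): hypothetical spring-stretch/asymmetry penalty of the
# FAP-parameterized template; external(p, image): hypothetical mismatch
# with contour/texture features. Only the optimizer reflects the paper.
def match_template(p0, image, internal, external, w=0.5):
    energy = lambda p: w * internal(p) + (1 - w) * external(p, image)
    res = minimize(energy, p0, method="Nelder-Mead",
                   options={"xatol": 1e-4, "fatol": 1e-4})
    return res.x
```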
The diffeomorphism model so useful in the biomathematics of normal morphological variability and disease is inappropriate for applications in embryogenesis, where whole coordinate patches are created out of single points. For this application we need a suitable algebra for the creation of something from nothing in a carefully organized geometry: a formalism for parameterizing discrete nondifferentiabilities of invertible functions on R^k, k > 1. One easy way to begin is via the inverse of the development map - call it the dedevelopment map, the deformation backwards in time. Extrapolated, this map will inevitably have singularities at which its derivative is zero. When the dedevelopment map is inverted to face forward in time, the singularities become appropriately isolated infinities of derivative. We have recently introduced growth visualizations via extrapolations to the isolated singularities at which only one directional derivative is zero. Maps inverse to these create new coordinate patches directionally rather than radially. The most generic singularity that suits this purpose is the crease f(x,y) = (x, x^2 y + y^3), which has already been applied in morphometrics for the description of focal morphogenetic phenomena. We apply it to embryogenesis in the form of its analytic inverse, and demonstrate its power using a priceless new data set of mouse embryos imaged in 3D by micro-MR with voxels smaller than 100 micrometers^3.
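The crease and its analytic inverse are simple to compute: for fixed x, the map y -> x^2 y + y^3 is monotone, so the cubic has exactly one real root, obtainable by Cardano's formula. A small sketch:

```python
# The crease singularity and its analytic inverse.
import numpy as np

def crease(x, y):
    """f(x, y) = (x, x**2 * y + y**3)."""
    return x, x**2 * y + y**3

def crease_inverse(u, v):
    """x = u; y is the unique real root of y**3 + u**2*y - v = 0
    (monotone in y since u**2 >= 0), found by Cardano's formula."""
    p, q = u**2, -v
    d = np.sqrt(q**2 / 4 + p**3 / 27)           # nonnegative since p >= 0
    y = np.cbrt(-q / 2 + d) + np.cbrt(-q / 2 - d)
    return u, y
```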
In this paper we describe the enormous potential that multilinear models hold for the analysis of data from neuroimaging experiments that rely on functional magnetic resonance imaging (fMRI) or other imaging modalities. A case is made for why one might fully expect that the successful introduction of these models to the neuroscience community could define the next generation of structure-seeking paradigms in the area. In spite of the potential for immediate application, there is much to do from the perspective of statistical science. That is, although multilinear models have already been particularly successful in chemistry and psychology, relatively little is known about their statistical properties. To that end, our research group at the University of Kentucky has made significant progress. In particular, we are in the process of developing formal influence measures for multilinear methods as well as associated classification models and effective implementations. We believe that these problems will be among the most important and useful to the scientific community. Details are presented herein and an application is given in the context of facial emotion processing experiments.
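As a concrete instance of a multilinear model, the sketch below fits a rank-R CP (PARAFAC) decomposition to a 3-way array by alternating least squares. The voxels x time x subjects reading is an assumption for illustration; nothing here reflects the authors' influence measures or classification models.

```python
# Bare-bones CP (PARAFAC) decomposition by alternating least squares.
import numpy as np

def cp_als(X, rank, n_iter=50):
    """Fit factor matrices A[0], A[1], A[2] so that
    X[i,j,k] ~ sum_r A[0][i,r] * A[1][j,r] * A[2][k,r]."""
    dims = X.shape
    rng = np.random.default_rng(0)
    A = [rng.standard_normal((d, rank)) for d in dims]
    unfold = lambda T, m: np.moveaxis(T, m, 0).reshape(dims[m], -1)
    kr = lambda U, V: (U[:, None, :] * V[None, :, :]).reshape(-1, rank)
    for _ in range(n_iter):
        for m in range(3):
            i, j = [k for k in range(3) if k != m]
            Z = kr(A[i], A[j])                  # Khatri-Rao of other factors
            G = (A[i].T @ A[i]) * (A[j].T @ A[j])
            A[m] = unfold(X, m) @ Z @ np.linalg.pinv(G)
    return A
```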
The geometric deformable model (GDM) determines object boundaries by evolving initial interfaces along the normal direction. A speed function controls how fast the interfaces move. When the speed function is zero or sufficiently small, the evolution stops or slows down significantly. Because the gradient flow equation that governs a GDM's evolution can be easily implemented with the level set technique, the GDM has the distinct advantage of being topologically flexible. Since its inception, the GDM has been successfully applied to many applications in medical imaging where variable geometry and topology of the model are crucial. Although much work has been done to improve and extend this method, little attention has been paid to the formulation of the speed function. Most existing GDMs use a fixed form of speed function for all applications. They also do not explicitly take noise into consideration. In this paper, we address these problems by formalizing the meaning of the speed function. We believe that the speed of interface evolution should be determined by the confidence (or lack thereof) that the interface is on the boundary of interest. We describe two new speed functions based on this concept and demonstrate their effectiveness with both simulated and actual medical data. Our results show that the new speed functions are less sensitive to noise, allow faster evolution, and provide better stopping power.
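The following sketch shows one explicit level-set update of the form phi_t = F |grad phi|, with a generic edge-based speed standing in for a confidence; the two speed functions proposed in the paper are not reproduced here, and F, dt and sigma are illustrative.

```python
# One explicit level-set evolution step for a geometric deformable model.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def level_set_step(phi, image, dt=0.2, sigma=1.5):
    g = gaussian_gradient_magnitude(image.astype(float), sigma)
    F = 1.0 / (1.0 + g**2)          # illustrative speed: ~1 in flat regions,
                                    # ~0 on strong boundaries (a "confidence")
    gy, gx = np.gradient(phi)
    return phi + dt * F * np.sqrt(gx**2 + gy**2)
```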
In view of the treatment effects of cosmetics, quality management is becoming more and more important. For efficiency reasons it is desirable to quantify these effects and predict them as a function of time. For this, a mathematical model of the skin's surface (epidermis) is needed. Such a model cannot be worked out purely analytically; it can only be derived with the help of measurement data. The signals of interest, as output by different measurement devices, consist of two parts: noise of high spatial frequencies (the stochastic signal) and periodic functions of low spatial frequencies (the deterministic signal). The two parts can be separated by correlation analysis. In addition to the Fourier Transform (FT), the paper introduces the Wavelet Transform (WT), a more recent method with excellent properties both for modeling the skin's surface and for evaluating treatment effects. Its main physical advantage over the FT is that local irregularities in the measurement signal (e.g. scars) remain in place and are not spread into mean-square values, as is the case with the FT. The method has recently been installed in industry, where it will be used in connection with a new in vivo measurement device for quality control of cosmetic products. As a texture parameter for an integral description of the human skin, the fractal dimension D is used, which is appropriate for classifying different skin regions and treatment effects as well.
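As one common way to estimate the fractal dimension D of a surface texture, here is a minimal box-counting sketch; it is an illustrative alternative to the wavelet-based route described above, with box sizes chosen arbitrarily.

```python
# Box-counting estimate of fractal dimension for a binary texture mask:
# N(s) ~ s**(-D), so the slope of log N against log(1/s) estimates D.
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        ny, nx = mask.shape[0] // s, mask.shape[1] // s
        blocks = mask[:ny * s, :nx * s].reshape(ny, s, nx, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```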
Beyond their involvement in ordinary surface rendering, the boundaries of organs in medical images have differential properties that make them quite useful for quantitative understanding. In particular, their geometry affords a framework for navigating the original solid, representing its R^3 contents quite flexibly as multiple pseudovolumes R^2 x T, where T is a real-valued parameter standing for screen time. A navigation is a smoothly parameterized series of image sections characterized by normal direction, centerpoint, scale and orientation. Such filmstrips represent a radical generalization of conventional medical image dynamics. The lances encountered in these navigations can be represented by constructs from classic differential geometry. Sequences of plane sections can be formalized as continuous pencils of planes, sets of cardinality infinity^1 that are sometimes explicitly characterized by a real-valued parameter and sometimes defined implicitly as the intersection (curve of common elements) of a pair of bundles of infinity^2 planes. An example of the first type of navigation is the pencil of planes through the tangent line at one point of a curve; of the second type, the cone of planes through a point tangent to a surface. The further enhancements of centering, orienting, and rescaling in the medical context are intended to leave landmark points or boundary intersections invariant on the screen. Edgewarp, a publicly available software package, allows free play with pencils of planes like these as they section one single enormous medical data resource, the Visible Human data sets from the National Library of Medicine. This paper argues the relative merits of such visualizations over conventional surface-rendered flybys for understanding and communication of associated anatomical knowledge.
A fully automated algorithm is presented that computes the epicardial and endocardial borders of the left ventricle of the heart for an echocardiographic image sequence acquired from the apical 4-chamber view. The method is tested for agreement against borders drawn by an expert on a prospective patient database of 68 patients. Since two cycles were acquired for all but 1 patient, a total of 135 image sequences were considered. The mean differences between expert and computer-generated endocardial borders were 5.13 mm and 5.81 mm at end diastole and end systole, respectively. The mean differences between the epicardial borders at end diastole and end systole were 5.09 mm and 5.22 mm, respectively.
Quantification of myocardial blood flow is useful for determining the functional severity of coronary artery lesions. With advances in MR imaging it has become possible to assess myocardial perfusion and blood flow in a non-invasive manner by rapid serial imaging following injection of a contrast agent. To date, most approaches reported in the literature have relied on deriving relative indices of myocardial perfusion directly from the measured signal intensity curves. The central volume principle, on the other hand, states that it is possible to derive absolute myocardial blood flow from the tissue impulse response. Because deconvolution is sensitive to noise in the measured data, conventional methods are sub-optimal; hence, we propose to use stochastic time series modeling techniques such as ARMA to obtain a robust impulse response estimate. It is shown that these methods, when applied to the optimal estimation of the transfer function, give accurate estimates of myocardial blood flow. The most significant advantage of this approach, compared with compartmental tracer kinetic models, is the use of a minimal set of prior assumptions on the data. The bottleneck in assessing myocardial blood flow does not lie in the MRI acquisition, but rather in the effort and time required for post-processing. It is anticipated that the very limited requirements for user input and interaction will be a significant advantage for the clinical application of these methods. The proposed methods are validated by comparison with mean blood flow measurements obtained from radio-isotope labeled microspheres.
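A minimal sketch of the idea, assuming an ARX form of the ARMA-with-input model (model orders and variable names are illustrative): fit the tissue curve from the arterial input by least squares, then read the impulse response off the fitted model; by the central volume principle, its peak estimates flow.

```python
# ARX fit of tissue response to arterial input, then impulse response.
import numpy as np

def arx_impulse_response(aif, tissue, na=2, nb=3, n_out=64):
    """tissue[n] = sum_i a_i*tissue[n-i] + sum_j b_j*aif[n-j], fitted by
    least squares; returns the fitted model's impulse response h."""
    n0 = max(na, nb)
    rows = [np.concatenate([tissue[n - na:n][::-1],       # past outputs
                            aif[n - nb + 1:n + 1][::-1]]) # current/past inputs
            for n in range(n0, len(tissue))]
    theta, *_ = np.linalg.lstsq(np.array(rows), tissue[n0:], rcond=None)
    a, b = theta[:na], theta[na:]
    h, impulse = np.zeros(n_out), np.zeros(n_out)
    impulse[0] = 1.0
    for n in range(n_out):                  # simulate fitted model on a pulse
        ar = sum(a[i] * h[n - 1 - i] for i in range(na) if n - 1 - i >= 0)
        x = sum(b[j] * impulse[n - j] for j in range(nb) if n - j >= 0)
        h[n] = ar + x
    return h                                # h.max() ~ flow (central volume)
```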
We present a system of PDEs for image restoration consisting of an anisotropic diffusion equation driven by a diffusion tensor which governs the direction and the speed of the diffusion. The structure of the diffusion tensor depends on the gradient of the image obtained from a coupled time-delay regularization equation. The diffusion resulting from this model is isotropic inside a homogeneous region and anisotropic along its boundary. Experimental results are given to show its effectiveness in tracking edges, recovering images with high levels of noise, and enhancing coherent structures. The existence, uniqueness and stability of the solutions of the PDEs are discussed.
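To make the mechanics concrete, here is one explicit time step of a scalar edge-stopping diffusion with a Gaussian-regularized gradient. The paper's model instead uses a full diffusion tensor with time-delay regularization, so the contrast parameter k, step dt and sigma below are purely illustrative.

```python
# One explicit step of nonlinear scalar diffusion: near-isotropic smoothing
# in homogeneous regions, strongly reduced diffusion across edges.
import numpy as np
from scipy.ndimage import gaussian_filter

def diffusion_step(u, dt=0.15, k=0.05, sigma=1.0):
    gy, gx = np.gradient(gaussian_filter(u, sigma))  # regularized gradient
    c = 1.0 / (1.0 + (gx**2 + gy**2) / k**2)         # edge-stopping diffusivity
    uy, ux = np.gradient(u)
    fy, fx = c * uy, c * ux
    div = np.gradient(fy, axis=0) + np.gradient(fx, axis=1)
    return u + dt * div
```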
The reliable detection of objects of interest in images with inhomogeneous or textured backgrounds is a typical detection and recognition problem in many practical applications, such as medical and industrial diagnostic imaging. In this paper, a method for object detection is described in the framework of a visual attention mechanism based on the concept of a multi-scale relevance function. The relevance function is an image local operator that has local maxima at the centers of location of supposed objects of interest or their relevant parts, termed primitive objects. The visual attention mechanism based on the relevance function provides the following advantageous features in object detection. A model-based approach is used which exploits a multi-scale morphological representation of objects (as object support regions in images) and a regression representation of their intensity in order to perform time-efficient image analysis. The multi-scale relevance function, applied to object detection, provides quick localization of objects of interest, invariant to object size and orientation.
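The abstract does not define the relevance function itself, so the sketch below stands in with a scale-normalized Laplacian-of-Gaussian stack, whose local maxima likewise mark candidate object centers approximately invariantly to size; the scales and threshold are hypothetical.

```python
# Multi-scale blob-like "relevance" stand-in: scale-normalized LoG maxima.
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def relevance_maxima(image, sigmas=(2, 4, 8, 16), thresh=0.02):
    stack = np.stack([(s**2) * -gaussian_laplace(image.astype(float), s)
                      for s in sigmas])       # scale-normalized responses
    peaks = (stack == maximum_filter(stack, size=3)) & (stack > thresh)
    scale_idx, ys, xs = np.nonzero(peaks)
    # (row, column, detected scale) for each candidate object center
    return [(y, x, sigmas[s]) for s, y, x in zip(scale_idx, ys, xs)]
```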
The ordered subsets (OS) algorithm [1] has enjoyed considerable interest for accelerating the well-known EM reconstruction algorithm for emission tomography and has recently found widespread use in clinical practice. This is primarily due to the fact that, while retaining the advantages of EM, the OS-EM algorithm can be easily implemented by slightly modifying the existing EM algorithm. The OS algorithm has also been applied [1] with the one-step-late (OSL) algorithm [2], which provides maximum a posteriori estimation based on Gibbs priors. Unfortunately, however, the OSL approach is known to be unstable when the smoothing parameter that weights the prior relative to the likelihood is relatively large. In this work, we note that the OS principle can be applied to any algorithm that involves calculation of a sum over projection indices, and show that it can also be applied to a generalized EM algorithm with useful quadratic priors. In this case, the algorithm is given in the form of iterated conditional modes (ICM), which is essentially a coordinate-wise descent method, and provides a number of important advantages. We also show that, by scaling the smoothing parameter in a principled way, the degree of smoothness in reconstructed images, which appears to vary depending on the number of subsets, can be efficiently matched for different numbers of subsets. Our experimental results indicate that the OS-ICM algorithm, along with the method of scaling the smoothing parameter, provides robust results as well as a substantial acceleration.
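For orientation, a bare-bones OS-EM iteration over a dense system matrix is sketched below; the paper's OS-ICM variant adds a quadratic-prior (ICM) correction per sub-iteration, which is omitted here, and the partition of the projections into subsets is left to the caller.

```python
# Ordered-subsets EM sketch for emission tomography.
import numpy as np

def os_em(A, y, subsets, n_iter=5, x0=None):
    """A: system matrix (projection bins x voxels); y: measured counts;
    subsets: list of arrays of projection row indices partitioning y."""
    x = np.ones(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        for rows in subsets:                 # one EM-like update per subset
            As = A[rows]
            ratio = y[rows] / np.maximum(As @ x, 1e-12)
            sens = np.maximum(As.T @ np.ones(len(rows)), 1e-12)
            x *= (As.T @ ratio) / sens
    return x
```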
In image restoration and reconstruction applications, unconstrained Krylov subspace methods represent an attractive approach for computing approximate solutions. They are fast, but unfortunately they do not produce approximate solutions that preserve nonnegativity. As a consequence, the error of the computed approximate solution can be large. Enforcing a nonnegativity constraint can produce much more accurate approximate solutions, but can also be computationally expensive. This paper considers a nonnegativity constrained minimization algorithm which represents a variant of an algorithm proposed by Kaufman. Numerical experiments show that the algorithm can be more accurate than, and computationally competitive with, unconstrained Krylov subspace methods.
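As a simple baseline for nonnegativity-constrained restoration (not Kaufman's algorithm or the paper's variant), a projected Landweber iteration alternates a gradient step on ||Ax - b||^2 with projection onto the nonnegative orthant:

```python
# Projected Landweber iteration for nonnegative least squares.
import numpy as np

def projected_landweber(A, b, n_iter=200, x0=None):
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    tau = 1.0 / np.linalg.norm(A, 2) ** 2    # step size <= 1/||A||_2^2
    for _ in range(n_iter):
        x = np.maximum(0.0, x - tau * (A.T @ (A @ x - b)))  # step + project
    return x
```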
This paper describes a method for reconstructing two-dimensional profiles from positron emission tomography (PET) emission data. Unlike the usual methods, which assume the detectors are small, the method is based on an accurate system response model which is valid for PET detectors of arbitrary size. The emission profile is represented as an orthogonal series of basis functions (on the circular region inside the scanner ring) which are tensor products of Bessel functions in the radial direction and classical harmonics in the angular direction. By applying a simple linear transformation to the emission sinogram, it can be viewed as data corresponding to detector arcs, and represented by basis functions involving Chebyshev polynomials and classical harmonics. The coefficients in the orthogonal series for the emission intensity are obtained by solving a block-diagonal linear system in which the data vector consists of the coefficients of the orthogonal series of the transformed sinogram. The coefficient matrix of this system is obtained from an orthogonal series representation of the probability that an emission is detected in one of the detector arcs. The mathematical description of this probability and the reconstruction method were developed in a recent paper by the authors. In this paper we discuss details of the numerical implementation and present numerical results obtained from applying this method to simulated data. The results indicate that our method produces reconstructions which are comparable in quality to those of the maximum likelihood expectation maximization (MLEM) method, with a speed similar to that of the filtered back projection (FBP) method.
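The computational payoff of the block-diagonal structure is that the large system decouples into independent small solves, e.g. one per angular harmonic; a sketch with a hypothetical data layout:

```python
# Solving a block-diagonal linear system block by block; `blocks` and
# `rhs_segments` are hypothetical containers of per-harmonic matrices and
# right-hand-side coefficient segments.
import numpy as np

def solve_block_diagonal(blocks, rhs_segments):
    return [np.linalg.solve(B, r) for B, r in zip(blocks, rhs_segments)]
```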
In medical imaging applications, 3D morphological data sets are often presented in 2D format without considering visual perspective. Without perspective, the resulting image can be counterintuitive to natural human visual perception, especially in the setting of an MR-guided neurosurgical procedure where depth perception is crucial. To address this problem we have developed a new projection scheme that incorporates a linear perspective transformation in various image reconstructions, including MR angiographic projection. In the scheme, an imaginary picture plane (PP) can be placed within or immediately in front of a 3D object, and the standpoint (SP) of an observer is fixed at a normal viewing distance of 25 cm in front of the picture plane. A clinical 3D angiography data set (TR/TE/Flip = 30/5.4/15) was obtained from a patient head on a 1.5T MR scanner in 4 min 10 sec (87.5% rectangular, 52% scan). The length, width and height of the image volume were 200 mm, 200 mm and 72.4 mm respectively, corresponding to an effective matrix size of 236x512x44 in transverse orientation (512x512x88 after interpolation). A maximum intensity projection (MaxIP) algorithm was used along the viewing rays of the perspective projection rather than the parallel projection. Thirty-six consecutive views were obtained at 10 degree intervals azimuthally. When displayed in cine mode, the new MaxIP images appeared realistic, with improved depth perception.
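A crude perspective MaxIP can be pictured as follows: cast a ray from a viewpoint placed in front of the picture plane through each PP pixel and keep the maximum sample along it. The geometry below (axis ordering, viewpoint placement, nearest-neighbor sampling) is simplified relative to the paper's scheme.

```python
# Perspective maximum-intensity projection sketch (nearest-neighbor rays).
import numpy as np

def perspective_mip(volume, sp_dist=250.0, n_steps=128):
    """volume indexed (z, y, x); viewpoint sp_dist voxels before the z=0
    picture plane; one ray per PP pixel, keep the max sample."""
    nz, ny, nx = volume.shape
    out = np.zeros((ny, nx))
    eye = np.array([-sp_dist, ny / 2.0, nx / 2.0])     # (z, y, x) viewpoint
    for j in range(ny):
        for i in range(nx):
            ray = np.array([0.0, j, i]) - eye          # through PP pixel (j,i)
            ts = np.linspace(0.0, 1.0, n_steps)
            # scale so the ray spans from the viewpoint to the far face z=nz
            pts = eye + ts[:, None] * (ray * (nz + sp_dist) / ray[0])
            idx = np.round(pts).astype(int)
            ok = ((idx[:, 0] >= 0) & (idx[:, 0] < nz) &
                  (idx[:, 1] >= 0) & (idx[:, 1] < ny) &
                  (idx[:, 2] >= 0) & (idx[:, 2] < nx))
            if ok.any():
                out[j, i] = volume[idx[ok, 0], idx[ok, 1], idx[ok, 2]].max()
    return out
```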
In this paper we provide a rigorous mathematical foundation for Tuned-Aperture Computed Tomography (TACT), a generalization of standard tomosynthesis that provides a significantly more flexible diagnostic tool. We also describe how the general TACT algorithm simplifies in important special cases, and we investigate the possibility of optimizing the algorithm by reducing the number of fiducial reference points. The key theoretical problem is how to use information within an x-ray image to discover, after the fact, what the relative positions of the x-ray source, the patient, and the x-ray detector were when the x-ray image was created.
Traditional filtering methods operate on the entire signal or image. In some applications, however, errors are concentrated in specific regions or features. A prime example is images generated using computed tomography. Practical implementations limit the amount of high frequency content in the reconstructed image, and consequently, edges are blurred. We introduce a new post-reconstruction edge enhancement algorithm, based on the reassignment principle and wavelets, that localizes its sharpening exclusively to edge features. Our method enhances edges without disturbing the low frequency textural details.
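To illustrate the localization idea only (the paper's method uses wavelets and the reassignment principle instead), the sketch below applies an unsharp-mask correction solely where the gradient magnitude is large, leaving smooth textural regions untouched; sigma, amount and the edge quantile are arbitrary.

```python
# Edge-localized sharpening: unsharp masking gated by an edge mask.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_gradient_magnitude

def edge_localized_sharpen(image, sigma=2.0, amount=1.0, edge_q=0.9):
    img = image.astype(float)
    detail = img - gaussian_filter(img, sigma)     # high-frequency residual
    g = gaussian_gradient_magnitude(img, sigma)
    mask = g > np.quantile(g, edge_q)              # keep only strong edges
    return img + amount * detail * mask            # textures left untouched
```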
Interaction with image databases is facilitated by using example images in a query. Query-by-example often requires a comparison of the features in the query image with features of the database image. The appropriate comparison function need not be the Euclidean distance between the two features; several non-Euclidean similarity measures have been shown to be visually more appropriate. This paper considers the problem of efficient retrieval of images using such similarity measures. A classical k-d tree based indexing algorithm is extended to such similarity measures, and an experimental performance evaluation of the algorithm is provided.
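As a rough analogue of tree-indexed retrieval under a non-Euclidean dissimilarity, the sketch below uses a ball tree (a close cousin of the k-d tree in the paper) with a user-supplied distance. Note the caveat: tree pruning is exact only for true metrics, and the chi-squared dissimilarity used here only approximately satisfies the triangle inequality; it is illustrative, not the paper's measure.

```python
# Query-by-example with a custom dissimilarity in a ball-tree index.
import numpy as np
from sklearn.neighbors import BallTree

def chi2_distance(f, g, eps=1e-12):
    # symmetric chi-squared dissimilarity between histogram-like features
    return 0.5 * np.sum((f - g) ** 2 / (f + g + eps))

features = np.random.rand(1000, 32)          # hypothetical database features
tree = BallTree(features, metric=chi2_distance)
dist, idx = tree.query(features[:1], k=5)    # 5 nearest to an example image
```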
A new implementation of the iterative closest point method for automatic registration is introduced. It is designed to unify three-dimensional models of the coronary artery tree created from biplane angiograms with three-dimensional models of the left ventricular epicardial surface created from perfusion SPECT. The speed and efficacy of the technique are evaluated using simulations and five patient studies. The technique is shown to be fast and quantitatively accurate in the simulations; evaluations of the results in patients are also satisfactory.
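For reference, a minimal rigid iterative-closest-point loop looks as follows: match each source point to its nearest destination point, solve the best rotation and translation in closed form (Kabsch/SVD), and repeat. This is the textbook scheme, not the authors' particular implementation.

```python
# Minimal rigid ICP: nearest-neighbor correspondences + Kabsch alignment.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, n_iter=30):
    """src, dst: (N, 3) point sets; returns rotation R and translation t
    such that src @ R.T + t approximately matches dst."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    for _ in range(n_iter):
        moved = src @ R.T + t
        nn = dst[tree.query(moved)[1]]       # closest-point correspondences
        mu_s, mu_d = moved.mean(0), nn.mean(0)
        H = (moved - mu_s).T @ (nn - mu_d)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no mirrors
        R_step = Vt.T @ D @ U.T
        R, t = R_step @ R, R_step @ (t - mu_s) + mu_d  # compose with step
    return R, t
```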