Digital moments approximate real moments, with an accuracy that depends on the grid resolution. Theoretical results exist on the speed of convergence, but more detailed studies of selected region shapes, and experimental data on convergence, are lacking. This paper discusses moments for specific shapes of regions and provides initial experimental data on the measured convergence of digital moments.
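As a hedged illustration of how such measurements can be made, the sketch below estimates digital moments of the unit disk by summing over pixel centers at grid spacing h and compares them with the continuous values; the function name digital_moment and the choice of a disk are ours, not the paper's.

```python
import math

def digital_moment(p, q, radius, h):
    """Digital (p,q) moment of a disk centered at the origin: sum x^p y^q
    over pixel centers inside the disk, weighted by the pixel area h^2."""
    n = int(radius / h) + 1
    total = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            x, y = i * h, j * h
            if x * x + y * y <= radius * radius:
                total += (x ** p) * (y ** q) * h * h
    return total

# Continuous reference values for the unit disk: m00 = pi, m20 = pi/4.
for h in (0.2, 0.02):
    e00 = abs(digital_moment(0, 0, 1.0, h) - math.pi)
    e20 = abs(digital_moment(2, 0, 1.0, h) - math.pi / 4)
    print(f"h={h}: |m00 err|={e00:.4f}  |m20 err|={e20:.4f}")
```

Refining the grid shrinks the error, which is the convergence behavior the paper measures experimentally.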
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users─please
sign in
to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on
SPIE.org.
We derive a digital version of snakes for extracting the boundary of discrete images, posed as a variational problem in the digital space. The snake method extracts the boundary of a region by dynamically deforming boundary curves and surfaces. In this paper, we propose a digital version of this variational problem for boundary detection. Since we deal with the optimization of a functional of the curvatures of points on the boundary, we first define curvature indices of vertices for discrete objects. Second, using these indices, we define the principal normal vectors of discrete curves and surfaces. These definitions permit us to derive a discrete snake, since the minimization criterion of the snake is defined using the curvatures of points on the boundary. Furthermore, we prove that the digital boundary detected by mathematical morphology is derived as the solution of this digital variational problem.
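The paper's curvature indices are defined for discrete objects; as a loose stand-in, the sketch below uses the turning angle at each vertex of a closed polygon as a discrete curvature and sums its squares as a snake-like internal bending energy. Both function names and this particular discretization are ours, not the paper's.

```python
import math

def turning_angles(poly):
    """Signed exterior (turning) angle at each vertex of a closed polygon:
    a simple stand-in for curvature indices of a discrete curve."""
    angles = []
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i - 1]
        bx, by = poly[i]
        cx, cy = poly[(i + 1) % n]
        v1 = (bx - ax, by - ay)
        v2 = (cx - bx, cy - by)
        angles.append(math.atan2(v1[0] * v2[1] - v1[1] * v2[0],
                                 v1[0] * v2[0] + v1[1] * v2[1]))
    return angles

def bending_energy(poly):
    """Snake-like internal energy: sum of squared discrete curvatures."""
    return sum(a * a for a in turning_angles(poly))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(sum(turning_angles(square)))  # total turning of a simple closed curve: 2*pi
print(bending_energy(square))       # 4 * (pi/2)^2 = pi^2
```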
Given a set of lines, line grouping considers the problem of deciding which lines are likely to belong to the same object or to a set of similar objects, before any recognition of objects has actually taken place. Vision scientists have suggested a number of factors that may be involved in the grouping process of lines, among which proximity, parallelism and collinearity are the easiest to quantify. These properties have often been measured by empirical estimates. Previous work, however, has shown that it is also possible to follow a more systematic approach based upon the uncertainty of pixel positions. Thus we can give precise definitions regarding the parallelism, collinearity or concurrency of lines whose parameters are only known to lie within given regions in the parameter space of lines. In this work we generalize this framework and show how it can be used during an entire line grouping process.
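A minimal sketch of the idea, under the simplifying assumption that each line's orientation is only known to lie in an interval [θ ± δ]: two lines may be parallel exactly when their orientation intervals overlap. The function name and the interval model are illustrative, not the paper's formulation.

```python
import math

def possibly_parallel(theta1, delta1, theta2, delta2):
    """Lines whose orientations are only known to lie in the intervals
    [theta_i - delta_i, theta_i + delta_i] (radians) may be parallel exactly
    when the intervals overlap.  Orientation is defined modulo pi."""
    d = abs(theta1 - theta2) % math.pi
    d = min(d, math.pi - d)           # orientation difference in [0, pi/2]
    return d <= delta1 + delta2

print(possibly_parallel(0.00, 0.02, 0.03, 0.02))  # True: intervals overlap
print(possibly_parallel(0.00, 0.01, 0.10, 0.01))  # False: they cannot be parallel
```

Collinearity and concurrency tests follow the same pattern, with regions in the full parameter space of lines instead of orientation intervals.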
The cross-section topology is a framework that extends the notions of digital topology to the case of grayscale images. In this framework, the use of 4- and 8-connectedness induces geometric constraints in topological operators such as thinning operators. We propose a general method to obtain better results by reducing the anisotropy of such operators. We illustrate the method with an application to the segmentation of human head MRI images.
Several approaches have been proposed for the study of topological properties of binary digital images: digital topology, the connected ordered topological space approach, and the cell complex approach. The topology used in the last two approaches is a discrete topology. In fact, the datum of a discrete topology is equivalent to the datum of a partially ordered set (order). One of the authors has proposed a study of homotopy in orders, and has introduced the notion of α-simple point. An α-simple point is an 'inessential point' for the topology in orders. A fundamental characteristic of α-simple points is that they can be deleted in parallel from an object while preserving its topological properties. This led us to propose, in a recent work, two parallel thinning algorithms based on the parallel deletion of α-simple points, for 2D and 3D binary images. Very few parallel thinning algorithms for 2D grayscale images have been proposed. The most recent ones have been developed by extending binary notions with the cross-section topology. In the same way, we extend our previous work in orders to the case of 2D grayscale images, and we propose two parallel thinning algorithms for such images.
A digital vision system and the computational algorithms used by the system for three-dimensional (3D) model acquisition are described. The system is named the Stonybrook VIsion System (SVIS). It can acquire the 3D model (which includes the 3D shape and the corresponding image texture) of a simple object within a 300 mm X 300 mm X 300 mm volume placed about 600 mm from the system. SVIS integrates Image Focus Analysis (IFA) and Stereo Image Analysis (SIA) techniques for 3D shape and image texture recovery. First, 4 to 8 partial 3D models of the object are obtained from 4 to 8 views of the object. The partial models are then integrated to obtain a complete model of the object. The complete model is displayed using 3D graphics rendering software (Apple's QuickDraw). Experimental results on several objects are presented.
This paper proposes a new method, called the 3D box method, for estimating the shape of an object from multiple views. The method is useful in fields that need to recover 3D information from images, such as industry and medical science. The concept of voting, and of counting votes from different image views, is used to solve the inverse problem of reconstructing a 3D object from 2D images. A straight line drawn from the lens center of the calibrated camera through an image point of the object's silhouette, extended in space, casts a single vote in each 3D box it passes through. Images are taken from thirty-six positions of a calibrated camera, divided into four groups of 3 X 3 cameras, with one group positioned on each side of the object. The technique of camera-position grouping is used to solve concavity problems by detecting occlusion with a stereo method and acquiring occlusion-free depth. To reduce the large memory required by a single-stage solution, a multi-stage algorithm is developed for recovering the accurate shape of the 3D object. Computer simulations demonstrate the performance of the algorithm on images of objects of various shapes.
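The voting idea can be sketched in a toy setting. Here we assume three orthographic views of a sphere (our simplification; the paper uses thirty-six calibrated camera positions): each voxel receives one vote per view whose silhouette contains its projection, and unanimously supported voxels form the reconstruction.

```python
def carve(n=24, radius=0.8):
    """Toy volumetric voting: each voxel of [-1,1]^3 receives one vote per
    view whose silhouette contains its projection; unanimously supported
    voxels are kept.  The object is a sphere and the three views are
    orthographic along the axes, so each silhouette test is a disk test."""
    def in_silhouette(u, v):
        return u * u + v * v <= radius * radius

    kept = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = -1 + (2 * i + 1) / n       # voxel center coordinates
                y = -1 + (2 * j + 1) / n
                z = -1 + (2 * k + 1) / n
                votes = (in_silhouette(y, z)    # view along x
                         + in_silhouette(x, z)  # view along y
                         + in_silhouette(x, y)) # view along z
                if votes == 3:
                    kept += 1
    return kept * (2.0 / n) ** 3               # occupied volume

# The hull of three disk silhouettes overestimates the sphere volume
# 4/3*pi*r^3 ~ 2.14, as expected of a silhouette-based reconstruction.
print(carve())
```

This overestimation of concave or curved regions by silhouette votes alone is exactly what the paper's stereo-based occlusion detection is meant to correct.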
In this paper, Computerized Tomography data on a cuberille grid is used. A common problem with volumetric image data is its sheer quantity. To reduce volumetric data, wavelet transformations have been used to evaluate the importance of each datum. We treat volumetric medical image data as the coefficients corresponding to three-dimensional piecewise constant basis functions of a wavelet transformation. We then omit the coefficients with the smallest magnitudes until the difference between the gray values of the original data and the represented data exceeds the tolerance. If only the important data are selected, the original cuberille data become scattered data. In this case, the Marching Cubes algorithm is no longer applicable: although Marching Cubes is one of the most successful voxel-based algorithms, its input must be cuberille grid data rather than scattered data. In the real world, many data sets are available as scattered data rather than cuberille grid data. Tetrahedrization is one of the pre-processing steps for trivariate scattered data interpolation. The quality of a piecewise linear interpolant of data points in four-dimensional space depends not only on the distribution of the data in three-dimensional space, but also on the data values. This paper discusses data-dependent criteria: (1) least squares fitting, (2) gradient difference, and (3) jump in normal-direction derivatives. A simulated annealing algorithm is used to achieve the global optimum for a wide class of optimization criteria. The result of trivariate scattered data interpolation is visualized through iso-surface rendering. The Geomview package is used on the Linux platform on a PC.
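The coefficient-dropping step can be sketched in 1D (the paper works with 3D cuberille data): an orthonormal Haar transform, with the smallest-magnitude coefficients zeroed for as long as the maximum reconstruction error stays below a tolerance. All function names are ours, and the greedy loop is only an illustration of the selection criterion.

```python
def haar(v):
    """Full 1D orthonormal Haar decomposition; length must be a power of 2."""
    v = list(v)
    out = []
    while len(v) > 1:
        s = [(v[2*i] + v[2*i+1]) / 2 ** 0.5 for i in range(len(v) // 2)]
        d = [(v[2*i] - v[2*i+1]) / 2 ** 0.5 for i in range(len(v) // 2)]
        out = d + out          # details stored coarse-to-fine
        v = s
    return v + out

def ihaar(c):
    """Inverse of haar()."""
    c = list(c)
    n = 1
    while n < len(c):
        s, d = c[:n], c[n:2*n]
        v = []
        for a, b in zip(s, d):
            v += [(a + b) / 2 ** 0.5, (a - b) / 2 ** 0.5]
        c[:2*n] = v
        n *= 2
    return c

def compress(data, tol):
    """Zero the smallest-magnitude Haar coefficients while the maximum
    gray-value error stays below tol; return (kept count, reconstruction)."""
    c = haar(data)
    kept = list(c)
    for i in sorted(range(len(c)), key=lambda i: abs(c[i])):
        trial = list(kept)
        trial[i] = 0.0
        if max(abs(a - b) for a, b in zip(ihaar(trial), data)) >= tol:
            break
        kept = trial
    return sum(1 for x in kept if x != 0.0), ihaar(kept)

data = [1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0]
print(compress(data, 0.01)[0])   # a piecewise constant signal needs few coefficients
```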
The problem of multigrid convergent surface area measurement came with the advent of computer-based image analysis. The paper proposes a classification scheme of local and global polyhedrization approaches which allows us to classify different surface area measurement techniques with respect to the underlying polyhedrization scheme. It is shown that a local polyhedrization technique such as marching cubes is not multigrid convergent towards the true value even for elementary convex regular solids such as cubes, spheres or cylinders. The paper summarizes work on global polyhedrization techniques with experimental results pointing towards correct multigrid convergence. The class of general ellipsoids is suggested to be a test set for such multigrid convergence studies.
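A small experiment in the spirit of the paper's negative result for local techniques: counting the boundary faces of a digitized sphere (a purely local, axis-aligned measurement) yields a surface area whose ratio to the true value tends to 3/2 rather than 1 as the grid is refined. The code is our sketch, not the paper's implementation.

```python
import math

def face_count_area(r):
    """Surface area of a digitized ball of radius r (in voxel units),
    measured by counting boundary faces between inside and outside voxels:
    a purely local technique with axis-aligned facets."""
    def inside(x, y, z):
        return x * x + y * y + z * z <= r * r

    n = int(r) + 2
    faces = 0
    for x in range(-n, n + 1):
        for y in range(-n, n + 1):
            for z in range(-n, n + 1):
                if inside(x, y, z):
                    for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        if not inside(x + dx, y + dy, z + dz):
                            faces += 1
    return faces

for r in (8, 16, 24):
    ratio = face_count_area(r) / (4 * math.pi * r * r)
    print(f"r={r}: measured/true = {ratio:.3f}")   # tends to 3/2, not 1
```

The constant 3/2 is the average of |n1| + |n2| + |n3| over all unit normals; no grid refinement removes the bias, which is why global polyhedrization techniques are needed for multigrid convergence.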
In biological and computational vision, the perception and description of geometric attributes of surfaces in natural scenes has received a great deal of attention. The physical and geometric properties of surfaces related to their optical characteristics depend on their texture. In previous work, we introduced the concept of the Gestalt of a surface, the Gestalt being a geometric object that retains the mathematical properties of the surface. The Gestalt is determined using the theory of Riemannian foliations from differential topology and the concept of an observer's subjective function in a probabilistic manner. In this paper, we continue our study of the geometry of natural surfaces with textures that exhibit statistical regularity at some resolution. It appears that all earlier algorithms in computer vision for the extraction of shape attributes from texture have made some (piecewise) smoothness assumption about the surfaces under study. However, natural surfaces, as well as most synthetic ones, are not smooth in the mathematical sense. Hence, the domain of applicability of current algorithms is severely limited. We propose algorithms to estimate geometric invariants of textured surfaces that are not necessarily smooth, but possess a statistically regular structure. An important step is based on a learning-theoretic method for the parameterization of textured surfaces. This method takes into account the statistical texture information in a 2D image of the surface. From a dictionary of geometry for the parameter space, a supervised artificial neural network selects the optimal choice for parameterization. As a result, the algorithms for shape from texture (slant, tilt, curvature...) have a more efficient implementation and a faster runtime. In this paper, we explain the significance of statistically symmetric patterns on surfaces.
We show how such texture regularity can be used to solve the linearized problem, leaving the full details of the linearization of the Gestalt of surfaces to a forthcoming paper. The solution of the linearized problem together with algorithms to linearized surface Gestalts provide the desired estimates for the geometric features of natural surfaces with statistically regular textures at some scale.
In Vision Geometry '99 we introduced the Gestalt approach to perceptual approximation of surfaces in natural scenes; that is, as a geometric theory retaining certain mathematical properties of surfaces while adhering to the human perceptual organization of vision. The theory of curves follows the same philosophy, relying on optical properties of physical objects whose features in the scale and resolution -- imposed by the observer -- afford 'a one-dimensional Gestalt.' The Gestalt theory of curves and surfaces is part of the Perceptual Geometry of the natural world that hypothetically evolves within intelligent systems capable of retaining partial information from stimuli in 'memory' and visual 'learning' through 'brain plasticity.' Perceptual geometry aims at explaining geometry from the perspective of visual perception, and in turn, how to apply such geometric findings to the ecological study of vision. Perceptual geometry attempts to answer fundamental questions in perception of form and representation of space through synthesis of cognitive and biological theories of visual perception with geometric theories of the physical world. Algorithms in this theory are typically presented based on a combination of a mathematical formulation of eye-movements and multi-scale multi-resolution filtering. In this paper, methods from statistical pattern recognition are applied to explain the learning-theoretic and perceptual analogs of geometric theory of space and its objects optically defined by curves and surfaces. The human visual system recovers depth from the visual stimuli registered on the two-dimensional surface of the retina by means of a variety of mechanisms, from bottom-up (such as stereopsis and motion parallax) to top-down influences. Perception and representation of space and objects within the visual environment rely on combination of a multitude of such mechanisms. 
The problem of modeling cortical representation of visual information is, therefore, very complex and challenging.
The history of cell complexes is closely related to the birth and development of topology in general. Johann Benedict Listing (1802 - 1882) introduced the term 'topology' into mathematics in a paper published in 1847, and he also defined cell complexes for the first time in a paper published in 1862. Carl Friedrich Gauss (1777 - 1855) is often cited as the one who initiated these ideas, but he did not publish either on topology or on cell complexes. The pioneering work of Leonhard Euler (1707 - 1783) on graphs is also often cited as the birth of topology, and Euler's work was cited by Listing in 1862 as a stimulus for his research on cell complexes. There are different branches in topology which have little in common: point set topology, algebraic topology, differential topology etc. Confusion may arise if just 'topology' is specified without clarifying which concept is used. Topological subjects in mathematics are often related to continuous models, which are largely irrelevant to computer-based solutions in image analysis; only a minority of topology publications in mathematics address discrete spaces appropriate for computer-based image analysis. In these cases, often the notion of a cell complex plays a crucial role. This paper briefly reports on a few of these publications. This paper is not intended to cover the very lively progress in cell complex studies within the context of image analysis during the last two decades. Basically it stops its historic review at the time when this subject in image analysis research gained speed in 1980 - 1990. As a general point of view, the paper indicates that image analysis contributes to a fusion of topological concepts, the geometric and the abstract cell structure approach and point set topology, which may lead towards new problems for the study of topologies defined on geometric or abstract cell complexes.
A polyhedral object in 3-dimensional space is often well represented by a set of points and line segments that act as its features. By a nice viewpoint of an object we mean a projective view in which all (or most) of the features of the object, relevant for some task, are clearly visible. Such a view is often called a non-degenerate view or projection. In this paper we are concerned with computing non-degenerate orthogonal and perspective projections of sets of points and line segments (objects) in 3-dimensional space. We outline the areas in which such problems arise, discuss recent research on the computational complexity of these problems, illustrate the fundamental ideas used in the design of algorithms for computing non-degenerate projections, and provide pointers to the literature where the results can be found.
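A minimal sketch of one degeneracy test, under orthographic projection: a view direction is degenerate for a point set when two distinct points project to the same image point, i.e. when their difference vector is parallel to the view direction. The function name and the tolerance are illustrative, and line-segment features would need additional tests.

```python
import itertools

def is_degenerate(points, d, eps=1e-9):
    """True when the orthographic projection along direction d maps two
    distinct 3D points of the set to the same image point, i.e. when some
    difference vector is parallel to d (zero cross product)."""
    for p, q in itertools.combinations(points, 2):
        v = (q[0] - p[0], q[1] - p[1], q[2] - p[2])
        cx = v[1] * d[2] - v[2] * d[1]        # cross product v x d
        cy = v[2] * d[0] - v[0] * d[2]
        cz = v[0] * d[1] - v[1] * d[0]
        if cx * cx + cy * cy + cz * cz < eps:
            return True
    return False

cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print(is_degenerate(cube, (0, 0, 1)))   # True: an axis view stacks vertices
print(is_degenerate(cube, (1, 2, 5)))   # False: a generic view direction
```

The degenerate directions form a lower-dimensional set, which is why almost every viewpoint of a finite point set is nice in this sense.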
Surface creases, ridges and ravines, provide us with important information about the shapes of 3D objects; they can be intuitively defined as curves on a surface along which the surface bends sharply. Our mathematical description of ridges and ravines is based on the study of sharp variation points of the surface normals or, equivalently, extrema of the principal curvatures along their curvature lines. We explore the similarity between image intensity edges (sharp variation points of the image intensity) and curvature extrema of a 3D surface. This allows us to adapt a basic edge detection technique to the detection of ridges and ravines on range images and on smooth surfaces approximated by polygonal meshes. Because ridges and ravines are of high-order differential nature, careful smoothing is required to achieve stable detection of perceptually salient ridges and ravines. To detect ridges and ravines on a range image, we use a nonlinear diffusion process acting on the image intensity surface normals. To detect ridges and ravines on a triangular mesh, we use a coupled nonlinear diffusion of mesh normals and vertices. We demonstrate the usefulness of ridges and ravines for segmentation and shape recognition purposes.
Numerical simulations allow the study of physical phenomena that are impossible or difficult to realize in the real world. For example, it is not conceivable to cause an atomic explosion or an earthquake to explore its effects on a building or a flood barrier. To be realistic, this kind of simulation (wave propagation) must take into account all the characteristics of the domain where it takes place, and particularly its three-dimensional aspect. Therefore, numericists need not only a three-dimensional model of the domain, but also a meshing of this domain. When finite-difference methods are used, this meshing must be hexahedral and regular. Moreover, new developments in the numerical propagation code provide tools for using meshes that interpolate the interior subdivisions of the domain. However, the manual generation of this kind of meshing is a long and difficult process. To improve and simplify this work, we propose a semi-automatic algorithm based on a block subdivision. It makes use of the dissociation between the geometrical and topological aspects. Indeed, with our topological model a regular hexahedral meshing is far easier to generate. The meshing geometry can be supplied by a geometric model, with reconstruction, interpolation or parameterization methods, but it is in any case completely guided by the topological model. The result is a software tool presently used by the Commissariat à l'Énergie Atomique in several full-size studies, notably in the framework of the Comprehensive Test Ban Treaty.
The invariance and covariance of features extracted from an object under certain transformations play important roles in the fields of pattern recognition and image understanding. For instance, in order to recognize a three-dimensional object, we need specific features extracted from the given object. These features should be independent of the pose and the location of the object. To extract such features, the authors have presented the three-dimensional vector autoregressive model (3D VAR model). This 3D VAR model is constructed on the quaternions, which form a basis of SU(2) (the rotation group in two-dimensional complex space). The 3D VAR model is then defined by the external products of 3D sequential data and the autoregressive (AR) coefficients, unlike conventional AR models. Therefore the 3D VAR model has some prominent features; for example, the AR coefficients of the 3D VAR model behave like vectors under any three-dimensional rotation. In this paper, we present an effective, straightforward algorithm that obtains the 3D VAR coefficients recursively from lower order to higher order.
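Since the model is built on quaternions, a small sketch of the underlying rotation machinery may help: a unit quaternion q rotates a vector v via the sandwich product q v q*. This is standard quaternion algebra, not the paper's algorithm; the function names are ours.

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, angle):
    """Rotate 3D vector v about a unit axis by the given angle via q v q*."""
    s = math.sin(angle / 2)
    q = (math.cos(angle / 2), axis[0] * s, axis[1] * s, axis[2] * s)
    qc = (q[0], -q[1], -q[2], -q[3])           # conjugate of a unit quaternion
    w = qmul(qmul(q, (0.0,) + tuple(v)), qc)   # v embedded as a pure quaternion
    return w[1:]

print(rotate((1, 0, 0), (0, 0, 1), math.pi / 2))  # ~ (0, 1, 0)
```

The claim that the AR coefficients behave like vectors under rotation means they transform under exactly this kind of sandwich product.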
Automatic oil-gas trap prediction is usually based on 2D vertical section data or horizontal slice data. 3D seismic prospecting provides the possibility of using interfaces, or digital surfaces, to locate parameters that may be used to accomplish oil-gas trap prediction. However, constructing such digital surfaces usually requires computerized interaction between the software system and the interpreter. This paper describes an automatic digital surface reconstruction technique based on the λ-connected segmentation method, which generates a sequence of digital surfaces to represent the interfaces of seismic layers. The technique uses a surface fitting algorithm based on the gradually varied function. This algorithm can also be applied to irregular domains. Reconstructed surfaces are used as the reference surfaces to describe the actual locations of data (parameters) on the interfaces. A modified fuzzy evaluation technique is developed for the oil-gas trap prediction. After the evaluation of every point on all surfaces, a volume that indicates the possibility of oil-gas is built. This makes it possible to use λ-connected searching to extract oil-gas components (traps) in the 3D image. Hence, it allows us to evaluate the size or volume of each trap. This paper also discusses the technique as used in a real oil-gas prediction problem to demonstrate the effectiveness of the concept in 3D seismic image processing.
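A minimal sketch of λ-connectedness in its simplest form (our reading; the paper's definition is more general and fuzzy-valued): grow a region through 4-neighbours whose gray values differ by at most λ.

```python
from collections import deque

def lambda_component(image, seed, lam):
    """Extract a simple lambda-connected component containing `seed`:
    breadth-first growth through 4-neighbours whose gray values differ
    from the current pixel by at most `lam`."""
    rows, cols = len(image), len(image[0])
    seen = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen
                    and abs(image[nr][nc] - image[r][c]) <= lam):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

img = [[1, 1, 9],
       [1, 2, 9],
       [8, 9, 9]]
comp = lambda_component(img, (0, 0), 1)
print(sorted(comp))   # the low-valued upper-left region only
```

The same search, run in a 3D possibility volume, is what extracts candidate traps as connected components.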
Straight lines, rectangles and other simple geometric features are common in man-made environments. Moreover, these geometric features often share particular relationships, for instance parallelism or orthogonality. Such a scene is very constrained, and its 3D description in terms of points is over-determined if the relations are taken into account. Sometimes a constraint solver can maintain the relations, but when estimated positions of the features are unavailable a priori, the knowledge from geometric relations is left unexploited. A better approach would consist in finding a parametric representation that directly merges the relations within a reduced set of parameters, which enforces the relational constraints once and for all. A problem with this idea is that both features and relationships are heterogeneous, so general methods are difficult to design. We propose here a method based on geometric reduction rules for automatically remodeling a scene into such a representation. The method is general for points, linear and planar elements together and can handle at the same time parallelism, orthogonality, collinearity and coplanarity. The number of reduced parameters is equal to the number of degrees of freedom of the system. The approach has been tested with segments, rectangles and points in various scenes, to evaluate the generality and performance of the method.
We present a new method to transform the spectral pixel information of a micrograph into an affine geometric description, which allows us to analyze the morphology of granular materials. We use spectral and pulse-coupled neural network based segmentation techniques to generate blobs, and a newly developed algorithm to extract dilated contours. A constrained Delaunay tessellation of the contour points results in a triangular mesh. This mesh is the basic ingredient of the Chordal Axis Transform, which provides a morphological decomposition of shapes. Such decomposition allows for grain separation and the efficient computation of the statistical features of granular materials.
We present an efficient multi-scale shape approximation scheme that adaptively and sparsely discretizes a shape's continuous (or densely sampled) contour by means of points. The notion of shape is intimately related to the notion of contour and, therefore, the efficient representation of the contour of a shape is vital to a computational understanding of the shape. Any discretization of a planar smooth curve by points is equivalent to a piecewise constant approximation of its parameterized X and Y coordinates. Using the Haar wavelet transform for the piecewise approximation yields a hierarchical scheme in which the size of the approximating point set is traded off against the morphological accuracy of the approximation. Our algorithm compresses the representation of the initial shape contour to a sparse sequence of points in the plane defining the vertices of the shape's polygonal approximation. Furthermore, it is possible to control the overall resolution of the approximation by a single, scale-independent parameter.
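The piecewise-constant idea can be sketched with an unnormalized Haar pyramid on one coordinate sequence (the scheme applies it to both X and Y): dropping the finest detail levels yields a piecewise constant approximation on larger sample blocks, i.e. a sparser point set at a coarser scale. Function names and the unnormalized averaging convention are ours.

```python
def haar_levels(v):
    """Haar pyramid of a length-2^k sequence via pairwise averages and
    half-differences: overall mean first, then details coarse to fine."""
    out = []
    while len(v) > 1:
        s = [(v[2*i] + v[2*i+1]) / 2 for i in range(len(v) // 2)]
        d = [(v[2*i] - v[2*i+1]) / 2 for i in range(len(v) // 2)]
        out = d + out
        v = s
    return v + out

def reconstruct(c, levels_kept):
    """Invert the pyramid after zeroing every detail level finer than
    `levels_kept`; the result is constant on blocks of samples, giving a
    coarser, sparser approximation of the coordinate sequence."""
    n = len(c)
    c = list(c)
    size, level = 1, 0
    while size < n:                 # zero the dropped detail levels
        if level >= levels_kept:
            for i in range(size, 2 * size):
                c[i] = 0.0
        size *= 2
        level += 1
    m = 1
    while m < n:                    # inverse transform: x = s + d, y = s - d
        s, d = c[:m], c[m:2*m]
        v = []
        for a, b in zip(s, d):
            v += [a + b, a - b]
        c[:2*m] = v
        m *= 2
    return c

xs = [0, 1, 2, 3, 4, 5, 6, 7]              # one coordinate of a sampled contour
print(reconstruct(haar_levels(xs), 2))     # piecewise constant on pairs
```

Keeping coefficients adaptively by magnitude, instead of by whole levels, is what makes the scheme in the paper sparse where the contour is smooth and dense where it bends.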
The conceptual design of many procedures used in image analysis starts with models which assume as input sets in Euclidean space, which we regard as real objects. However, the application finally requires that the Euclidean (real) objects be modelled by digital sets, i.e. they are approximated by their corresponding digitizations. Also, 'continuous' operations (for example integrations or differentiations) are replaced by 'discrete' counterparts (for example summations or differences), on the assumption that such a replacement has only a minor impact on the accuracy or efficiency of the implemented procedure. This paper discusses applications of results in number theory to error estimations, accuracy evaluations, correctness proofs etc. for image analysis procedures. Knowledge about digitization errors or approximation errors may help to suggest ways in which they can be kept within required limits. Until now, image analysis has had only a minor impact on developments in number theory, by defining new problems or by specifying how existing results may be discussed in the context of image analysis. There might be a more fruitful exchange between both disciplines in the future.
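A classic number-theoretic example of a digitization error of exactly this kind is the Gauss circle problem: the number of lattice points in a disk of radius r deviates from the area πr² by an error that number theory bounds much more tightly than the trivial O(r) perimeter argument. The sketch below simply measures that error.

```python
import math

def lattice_point_count(r):
    """Number of integer lattice points in the disk x^2 + y^2 <= r^2,
    i.e. the digital counterpart of the disk's area pi * r^2."""
    n = int(r)
    return sum(1 for x in range(-n, n + 1) for y in range(-n, n + 1)
               if x * x + y * y <= r * r)

for r in (10, 100):
    err = lattice_point_count(r) - math.pi * r * r
    print(f"r={r}: N(r) - pi r^2 = {err:+.1f}")
```

The relative error vanishes as r grows, which is the kind of guarantee number theory can supply for digital area estimators.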
A novel and efficient invertible transform for shape segmentation is defined that serves to localize and extract shape characteristics. This transform, the chordal axis transform (CAT), remedies the deficiencies of the well-known medial axis transform (MAT). The CAT is applicable to shapes with discretized boundaries, without restriction on the sparsity or regularity of the discretization. Using Delaunay triangulations of shape interiors, the CAT induces a structural segmentation of shapes into limb and torso chain complexes of triangles. This enables the localization, extraction, and characterization of the morphological features of shapes. It also yields a pruning scheme for excising morphologically insignificant features and simplifying shape boundaries and descriptions. Furthermore, it enables the explicit characterization and exhaustive enumeration of primary, semantically salient shape features. Finally, a process to characterize and represent a shape in terms of its morphological features is presented. This results in the migration of a shape from its affine description to an invariant, semantically salient, feature-based representation in the form of attributed planar graphs. The research described here is part of a larger effort aimed at automating image understanding and computer vision tasks.
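The chain-complex segmentation rests on classifying the triangles of the interior triangulation by how many of their edges are internal chords (shared with another triangle) rather than boundary edges. A minimal sketch of that classification step; the labels and the toy triangulations in the usage note are illustrative assumptions, not the paper's exact data structures:

```python
from collections import Counter

def classify_triangles(triangles):
    """Label each triangle of a triangulated shape interior by how many
    of its edges are internal chords (edges shared with another triangle):
    1 -> terminal (tip of a limb), 2 -> sleeve, 3 -> junction."""
    def edges(t):
        a, b, c = t
        return [tuple(sorted(e)) for e in ((a, b), (b, c), (c, a))]
    counts = Counter(e for t in triangles for e in edges(t))
    kind = {0: 'isolated', 1: 'terminal', 2: 'sleeve', 3: 'junction'}
    return [kind[sum(1 for e in edges(t) if counts[e] == 2)]
            for t in triangles]
```

On a triangle strip, the two ends come out terminal with sleeves in between (a limb-like chain), while a triangle surrounded on all three sides is a junction, the seed of a torso-like complex.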
We present a syntactic and metric two-dimensional shape recognition scheme based on shape features. The principal features of a shape can be extracted and semantically labeled by means of the chordal axis transform (CAT), with the resulting generic features, namely torsos and limbs, forming the primitive segmented features of the shape. We introduce a context-free universal language for representing all connected planar shapes in terms of their external features, based on a finite alphabet of generic shape feature primitives. Shape exteriors are then syntactically represented as strings in this language. Although this representation of shapes is not complete, in that it only describes their external features, it effectively captures shape embeddings, which are important properties of shapes for purposes of recognition. The elements of the syntactic strings are associated with attribute feature vectors that capture the metrical attributes of the corresponding features. We outline a hierarchical shape recognition scheme, wherein the syntactical representation of shapes may be 'telescoped' to yield a coarser or finer description for hierarchical comparison and matching. We finally extend the syntactic representation and recognition to completely represent all planar shapes, albeit without a generative context-free grammar for this extension.
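The telescoping idea can be illustrated with a toy bracketed string notation. The bracket scheme and the letters below (L for limb, T for torso) are hypothetical stand-ins, not the paper's actual alphabet or grammar; the point is only that sub-features nested deeper than a chosen level collapse into a placeholder, yielding a coarser description:

```python
def telescope(s, depth):
    """Collapse sub-features nested deeper than `depth` into a single
    '*' placeholder, producing a coarser syntactic description."""
    out, d = [], 0
    for ch in s:
        if ch == '(':
            d += 1
            if d <= depth:
                out.append(ch)
            elif d == depth + 1:
                out.append('*')  # mark the collapsed sub-feature
        elif ch == ')':
            if d <= depth:
                out.append(ch)
            d -= 1
        elif d <= depth:
            out.append(ch)
    return ''.join(out)
```

Matching two such strings at successively larger depths gives the coarse-to-fine hierarchical comparison the abstract describes.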
The 'space of circles' Γ² is a representation space in which circles are considered as points. This paper first presents the main properties of this space, with special attention paid to its rich pseudo-Euclidean structure, equivalent to duality with respect to a certain paraboloid. The interest of Γ² for image processing is demonstrated by two applications concerning a finite set of points in the plane: its associated Voronoi tessellation, and an efficient description of the circumscribed circle (minimal enclosing circle) of the given set.
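As an independent reference for what the second application computes, here is a brute-force minimal enclosing circle. It exploits the fact that the optimum is determined by a support set of two or three of the input points; it does not use the paper's pseudo-Euclidean construction, and is only practical for small inputs:

```python
import math
from itertools import combinations

def _circle_two(p, q):
    # circle with segment pq as diameter
    cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    return (cx, cy, math.hypot(p[0] - q[0], p[1] - q[1]) / 2)

def _circle_three(a, b, c):
    # circumscribed circle through three non-collinear points
    d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    ux = ((a[0]**2+a[1]**2)*(b[1]-c[1]) + (b[0]**2+b[1]**2)*(c[1]-a[1])
          + (c[0]**2+c[1]**2)*(a[1]-b[1])) / d
    uy = ((a[0]**2+a[1]**2)*(c[0]-b[0]) + (b[0]**2+b[1]**2)*(a[0]-c[0])
          + (c[0]**2+c[1]**2)*(b[0]-a[0])) / d
    return (ux, uy, math.hypot(a[0] - ux, a[1] - uy))

def min_enclosing_circle(points):
    """Smallest circle (cx, cy, r) containing all points, by brute force
    over the two- and three-point candidate support sets."""
    def covers(c):
        return all((x - c[0])**2 + (y - c[1])**2 <= (c[2] + 1e-9)**2
                   for x, y in points)
    cands = [_circle_two(p, q) for p, q in combinations(points, 2)]
    cands += [_circle_three(a, b, c) for a, b, c in combinations(points, 3)
              if abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])) > 1e-12]
    return min((c for c in cands if covers(c)), key=lambda c: c[2])
```

Efficient solutions (Welzl's randomized algorithm, or the dual construction in Γ² described in the paper) achieve the same result in expected linear time.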
The normal (Gaussian) distribution is one of the most important tools in computer vision and other fields. For instance, it is used as a preprocessing tool for subsequent operations in computer vision, especially for noise reduction. It is also used as a set of functional bases to approximate a function given as a set of sampled points; this is the radial basis function (RBF) method. The RBF method is also used in computational neuroscience to explain human mental rotation. From an information-theoretic view, the normal distribution maximizes the Boltzmann-Shannon entropy. In this paper, we give yet another such distribution, called the q-normal distribution, together with many useful formulae for applying it in various fields.
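As a small illustration of the preprocessing use mentioned above, here is ordinary Gaussian smoothing of a 1-D signal (a generic sketch of the standard technique, unrelated to the q-normal formulae the paper derives):

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized sampled Gaussian of given sigma over [-radius, radius]."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma=1.0):
    """Convolve a 1-D signal with a Gaussian kernel, clamping indices
    at the borders (replicate-edge boundary handling)."""
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, radius)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - radius, 0), n - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out
```

Because the kernel weights sum to one, constant regions are preserved exactly while isolated noise spikes are spread out and attenuated.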
In this paper we discuss an alternative approach to optimal and near-optimal tracking and estimation of rotations, with applications in vision systems. The technique used here is Geometric Algebra (GA), which provides a very elegant and efficient framework for dealing with geometric entities and transformations. It is coordinate-free and has a well-developed calculus and multi-linear algebra associated with it. Much of the power of GA lies in the way it represents and interprets rotations. Estimation and tracking of geometric entities are important in many fields (e.g. 3D object tracking, multi-camera systems, space and terrestrial navigation), and to date Kalman filter techniques have provided a fast and efficient means of solving such problems. We show how Kalman filters with conventional state-spaces are derived in a GA setting, and then extend these ideas to deal with a state-space consisting of rotors. Taking advantage of the fact that, in the GA framework, we are able to write down a minimally parameterized cost function and differentiate with respect to rotors, a solution to the tracking and optimal orientation estimation problem can be obtained efficiently. The resulting algorithm is applied to a variety of real and simulated data using articulated models. In particular, we look at tracking human motion data from an optical motion capture unit. The extraction, interpolation, modification and classification of the underlying motion are also discussed with reference to future research directions.
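In the plane, a rotor reduces to a single rotation angle, and the least-squares orientation estimate has a closed form. The sketch below illustrates only that elementary estimation step; it is a 2-D stand-in for intuition, not the paper's GA/Kalman formulation over 3-D rotors:

```python
import math

def estimate_rotation_2d(src, dst):
    """Closed-form least-squares rotation angle (about the origin) that
    best aligns the points `src` to the points `dst`: the minimizer of
    sum |R(theta) p_i - q_i|^2 is atan2(sum cross, sum dot)."""
    s_cross = sum(x * v - y * u for (x, y), (u, v) in zip(src, dst))
    s_dot = sum(x * u + y * v for (x, y), (u, v) in zip(src, dst))
    return math.atan2(s_cross, s_dot)
```

In a rotor-valued state-space, a Kalman-style tracker would refine an estimate of exactly this kind recursively from noisy correspondences, using the rotor derivatives the GA framework provides.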
Two images of a single scene are related by the epipolar geometry, whose determination is very important in many applications such as scene modeling and navigation. This paper presents a new robust linear algorithm that exploits a weighted normalization strategy for the matching points, together with a new uncertainty analysis for the fundamental matrix. First, a weighting factor and a cost function are introduced on the basis of the residual errors and the epipolar distance. Second, the epipolar geometry is determined via a simple transformation (weighted translation and scaling) of the matching points; the additional computation required by this transformation is insignificant. Finally, a large number of real images and simulated data sets were used in intensive experiments to quantify the uncertainty of the fundamental matrix estimated by the presented method.
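The weighted normalization is a variant of the standard isotropic (Hartley) normalization of matching points. The sketch below shows the unweighted baseline transformation, translation to the centroid followed by scaling, which is the inexpensive step such schemes build on; the paper's specific weighting is not reproduced here:

```python
import math

def normalize_points(pts):
    """Isotropic (Hartley) normalization: translate the points to their
    centroid and scale so the mean distance from the origin is sqrt(2).
    Returns the normalized points and the 3x3 transform T that maps
    homogeneous input points to normalized ones."""
    n = len(pts)
    cx = sum(x for x, y in pts) / n
    cy = sum(y for x, y in pts) / n
    mean_dist = sum(math.hypot(x - cx, y - cy) for x, y in pts) / n
    s = math.sqrt(2) / mean_dist
    T = [[s, 0.0, -s * cx],
         [0.0, s, -s * cy],
         [0.0, 0.0, 1.0]]
    normed = [((x - cx) * s, (y - cy) * s) for x, y in pts]
    return normed, T
```

A fundamental matrix F' estimated from normalized points in both images is denormalized as F = T2^T F' T1, and this conditioning is what makes the linear (eight-point) estimate numerically stable.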
This paper presents an approach to calibrating a fish-eye lens camera with high accuracy, for stereo vision systems that recover dense depth images. The camera model accounts for the fish-eye deformation and for the major sources of camera distortion, including radial, decentering and thin-prism distortion. We use nonlinear least-squares techniques to solve for the camera parameters that best project three-dimensional points of a calibration pattern onto intensity edges in an image of this pattern, without explicitly extracting features such as corners, edges or points. The effectiveness of the proposed calibration method is demonstrated experimentally.
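The distortion terms named above are commonly modelled as in the following sketch, a generic Brown-style model with assumed coefficient names (k1, k2 radial, p1, p2 decentering/tangential, s1, s2 thin-prism); the paper's exact fish-eye model is not reproduced here:

```python
def distort(x, y, k1, k2, p1, p2, s1, s2):
    """Apply radial, decentering (tangential) and thin-prism distortion
    to ideal normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x) + s1 * r2
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y + s2 * r2
    return xd, yd
```

In a calibration of the kind described, these coefficients would be among the parameters adjusted by the nonlinear least-squares solver so that projected pattern points land on the observed intensity edges.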
A new method for measuring the object-independent irradiance function of combined Gray-code and phase-shift 3-D measurement systems is presented. The irradiance functions of the projected sinusoidal patterns are measured with high resolution in phase space, without leaving the signal chain of the measurement system. This preserves the system-specific distortions of the projected intensity functions. The system-specific irradiance functions can then be used to compute the object phase numerically. It is shown that the systematic phase-shift measurement error caused by non-sinusoidal phase-shift patterns and phase-shifter miscalibration can be reduced significantly.
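The numerical phase computation can be illustrated with the standard N-step phase-shift formula; this generic sketch assumes ideal sinusoidal patterns and exact shifts of 2*pi/N, which is precisely the assumption whose violation the paper's measured irradiance functions correct for:

```python
import math

def recover_phase(intensities):
    """Standard N-step phase-shift algorithm: given samples
    I_k = A + B*cos(phi + 2*pi*k/N), recover phi via the discrete
    sine and cosine correlations."""
    n = len(intensities)
    s = sum(I * math.sin(2 * math.pi * k / n)
            for k, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * k / n)
            for k, I in enumerate(intensities))
    return math.atan2(-s, c)
```

When the projected pattern deviates from a pure sinusoid or the shifts are miscalibrated, this formula produces the systematic phase errors the abstract refers to; substituting the measured system-specific irradiance function removes that bias.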
We present a method to simplify the structure of the surface skeleton of a 3D object such that the loss of information can be kept under control. Our approach is to prune surface border jaggedness by removing peripheral curves. The surface border is detected and all curves belonging to it are identified. Then, distance information is used to distinguish the short curves, whose voxels are deleted where possible, provided that the topology is not changed. Our method is simple and fast, and can also be applied to two-voxel-thick surface skeletons. It prunes only curves that correspond to minor features of the object, without shortening the remaining, more significant curves. The structure of the surface skeleton becomes significantly simpler. The simplified set can be used directly for shape representation, or as input to curve skeleton computation. If the curve skeleton is extracted from the simplified set, its structure is more manageable than if it is obtained from the non-simplified set.
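The curve-selection step can be sketched as a simple length threshold over border curves given as polylines; this illustrative fragment omits the voxel deletion and the topology-preservation check that the method described above performs:

```python
import math

def prune_peripheral_curves(curves, min_length):
    """Keep only border curves whose arc length reaches the threshold;
    shorter peripheral curves are candidates for pruning. (The actual
    method additionally verifies that deleting a curve's voxels does
    not change the object's topology; that check is omitted here.)"""
    def arc_length(curve):
        return sum(math.dist(a, b) for a, b in zip(curve, curve[1:]))
    return [c for c in curves if arc_length(c) >= min_length]
```

Distance information of this kind is what separates jagged, minor border features from the significant curves that must survive simplification.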