The interpretation of standard 2D X-ray images by humans is often very difficult due to the lack of visual cues to depth in an image produced by transmitted radiation. The 3D Imaging Group has previously developed stereoscopic X-ray systems providing binocular parallax as a depth cue to aid image interpretation. The stereoscopic images produced have proven suitable for human viewing and allow the observer to determine the relative position of objects within the scene under consideration. Such additional information is useful for scene interpretation and understanding. The binocular parallax introduced into X-ray images can be utilized in a similar way to television-type stereoscopic systems, where the disparity is used to determine the range of objects within the scene. This range information can be used in a number of ways, for instance co-ordinate measurement. Current research at Nottingham has concentrated on grouping object points of similar depth and producing a series of contiguous slices through the scene of interest. The purpose of producing this new database is to combine it with existing reconstruction software used in CAT scanning techniques to provide a 2½D visualization of the observed scene or object. This representation of the scene is intended to offer an alternative view to the observer, further enhancing their interpretation ability.
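The disparity-to-range step described above can be sketched with the standard pinhole stereo relation; the function names and the depth-slice grouping below are illustrative assumptions, not the authors' implementation:

```python
def depth_from_disparity(disparity_px, focal_len_px, baseline_mm):
    """Range of a point from its stereo disparity (pinhole model).

    Z = f * B / d: larger disparity means the point is nearer.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_len_px * baseline_mm / disparity_px

def depth_slices(disparities, focal_len_px, baseline_mm, slice_mm):
    """Group recovered ranges into contiguous depth slices
    (hypothetical grouping scheme, quantizing by slice thickness)."""
    slices = {}
    for d in disparities:
        z = depth_from_disparity(d, focal_len_px, baseline_mm)
        slices.setdefault(int(z // slice_mm), []).append(z)
    return slices
```

With a 1000 px focal length and 50 mm baseline, a 10 px disparity corresponds to a range of 5000 mm; points whose recovered ranges fall in the same slice are grouped together.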
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
This paper describes a system called Q-PIT. It is a prototype example of a class of systems we define as populated information terrains. Within the scope of our work, we examine issues from multi-user virtual reality, data visualization and the use of databases to support cooperative work. We describe the use of Q-PIT as an information terrain, show how such a terrain can be generated, explored and manipulated, before considering the issues of populating such a terrain with more than one user.
This paper reports a novel application of 3D visualization in an ARPA-funded remote radiation treatment planning (RTP) experiment, utilizing supercomputer 3D volumetric modeling power and NASA ACTS (Advanced Communication Technology Satellite) communication bandwidth in the Ka-band range. The objective of radiation treatment is to deliver a tumoricidal dose of radiation to a tumor volume while minimizing doses to surrounding normal tissues. High-performance graphics computers are required to allow physicians to view a 3D anatomy, specify proposed radiation beams, and evaluate the dose distribution around the tumor. Supercomputing power is needed to compute and even optimize dose distribution according to pre-specified requirements. High-speed communications offer possibilities for sharing scarce and expensive computing resources (e.g., hardware, software, personnel) as well as medical expertise for 3D treatment planning among hospitals. This paper provides initial technical insights into the feasibility of such resource sharing. The overall deployment of the RTP experiment, visualization procedures, and parallel volume rendering in support of remote interactive 3D volume visualization will be described.
Interactive visualization is complicated by the complexity of the objects being visualized. Sampled or computed scientific data is often dense, in order to capture high-frequency components in measured data or to accurately model a physical process. Common visualization techniques such as isosurfacing on such large meshes generate more geometric primitives than can be rendered in an interactive environment. Geometric mesh reduction techniques have been developed in order to reduce the size of a mesh with little compromise in image quality. Similar techniques have been used for functional surfaces (terrain maps) which take advantage of the planar projection. We extend these methods to arbitrary surfaces in 3D and to any number of variables defined over the mesh by developing an algorithm for mapping from a surface mesh to a reduced representation and measuring the introduced error in both the geometry and the multivariate data. Furthermore, through error propagation, our algorithm ensures that the errors in both the geometric representation and the multivariate data do not exceed a user-specified upper bound.
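The error-propagation idea, dropping a vertex only while the accumulated geometric and data errors both stay under user-specified bounds, can be illustrated on a 1D polyline analogue of the surface-mesh algorithm; all names here are hypothetical and the real method operates on arbitrary 3D meshes:

```python
def reduce_polyline(xs, ys, data, eps_geom, eps_data):
    """Drop interior vertices whose removal keeps both the geometric
    error and the interpolated-data error under user bounds.

    Errors are propagated: each run of dropped vertices accumulates
    its introduced error, so removals cannot compound past the bound.
    xs must be strictly increasing."""
    kept = [0]
    err_g = err_d = 0.0
    for i in range(1, len(xs) - 1):
        a, b = kept[-1], i + 1
        t = (xs[i] - xs[a]) / (xs[b] - xs[a])
        # error introduced by linear interpolation across the gap
        g = abs(ys[i] - ((1 - t) * ys[a] + t * ys[b]))
        d = abs(data[i] - ((1 - t) * data[a] + t * data[b]))
        if err_g + g <= eps_geom and err_d + d <= eps_data:
            err_g += g   # propagate the introduced errors
            err_d += d
        else:
            kept.append(i)
            err_g = err_d = 0.0
    kept.append(len(xs) - 1)
    return kept
```

A flat polyline carrying linearly varying data collapses to its endpoints, while a geometric bump that would exceed the bound forces its vertex to be kept.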
Hierarchical decomposition of data using Haar and Legendre scaling functions as well as multiresolution compression and decomposition of data using hyperbolic 3D Haar wavelets, Battle-Lemarie wavelets, and biorthogonal wavelets have been used in the past to visually explore large volumetric data sets. In this work we explore the use of Legendre wavelets for efficient volumetric compression and rendering of data. There are several advantages to using Legendre wavelets. First, by using wavelets rather than scaling functions, we gain the advantages associated with multiresolution decomposition of data. This includes efficient exploration of data at different levels of detail and the advantages of incremental rendering of data and progressive transmission. Second, the main advantage of these wavelets over other wavelet models arises from the fact that they do not overlap and therefore require filters of only unit length. In contrast, Battle-Lemarie wavelets require filters of infinite length, as do B-spline wavelets. Biorthogonal wavelets require filters of finite length; however, Legendre wavelets use filters of only unit length. This results in relatively simple and efficient computation. We use the coherent projection method and an L2 error criterion to compress and render the volumetric data, although the model is flexible enough to accommodate other volumetric rendering techniques and other error criteria. The Legendre wavelet model for volumetric data compression and rendering has been implemented. The system has been used for visual data exploration of several large volumetric data sets. Detailed statistical measures of compression ratios, rendering time, and associated errors have been derived for different threshold values across many volumetric data sets. Although the Legendre wavelet model requires much more time and space for lossless compression, it clearly outperforms the Haar wavelet model in compression and image quality for lossy compression with very small L2 errors. This characteristic is very helpful in visual exploration of data.
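The unit-length-filter property that motivates Legendre wavelets can be seen in the Haar case, the simplest non-overlapping wavelet mentioned above: each coefficient is computed from just two samples, and by orthonormality the L2 error of lossy compression equals the norm of the discarded coefficients. The sketch below uses 1D Haar as a stand-in for the Legendre family; it is not the authors' renderer:

```python
import math

def haar_forward(signal):
    """Full multiresolution orthonormal Haar decomposition (len = 2^k)."""
    out = list(signal)
    n = len(out)
    while n > 1:
        half = n // 2
        tmp = out[:n]
        for i in range(half):
            a, b = tmp[2 * i], tmp[2 * i + 1]
            out[i] = (a + b) / math.sqrt(2)         # scaling coefficients
            out[half + i] = (a - b) / math.sqrt(2)  # wavelet coefficients
        n = half
    return out

def haar_inverse(coeffs):
    out = list(coeffs)
    n = 1
    while n < len(out):
        tmp = out[:2 * n]
        for i in range(n):
            s, d = tmp[i], tmp[n + i]
            out[2 * i] = (s + d) / math.sqrt(2)
            out[2 * i + 1] = (s - d) / math.sqrt(2)
        n *= 2
    return out

def compress(signal, threshold):
    """Zero small wavelet coefficients; by orthonormality the L2 error
    equals the L2 norm of the dropped coefficients."""
    c = haar_forward(signal)
    kept = [x if abs(x) >= threshold else 0.0 for x in c]
    err = math.sqrt(sum(x * x for x, k in zip(c, kept) if k == 0.0))
    return haar_inverse(kept), err
```

With threshold 0 the signal is recovered exactly; a large threshold leaves only the mean, with the reported L2 error accounting for everything discarded.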
A common task in data analysis is to compare two or more sets of data, statistics, presentations, etc. A predominant method in use is side-by-side visual comparison of images. While straightforward, it burdens the user with the task of discerning the differences between the two images. The user is further taxed when the images are of 3D scenes. This paper presents several methods for analyzing the extent, magnitude, and manner in which surfaces in 3D differ in their attributes. The surface geometries are assumed to be identical and only the surface attributes (color, texture, etc.) are variable. As a case in point, we examine the differences obtained when a 3D scene is rendered progressively using radiosity with different form factor calculation methods. The comparison methods include extensions of simple methods such as mapping difference information to color or transparency, and more recent methods including the use of surface texture, perturbation, and adaptive placement of error glyphs.
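The simplest of the comparison methods mentioned, mapping difference information to color, might look like the following sketch, which sends a signed per-surface-point attribute difference onto a diverging blue-white-red ramp (the ramp choice and names are our assumptions, not the paper's method):

```python
def diff_to_color(a, b, max_diff):
    """Map a signed attribute difference onto a diverging ramp:
    blue where a < b, white where equal, red where a > b.
    max_diff sets the difference at which the ramp saturates."""
    t = max(-1.0, min(1.0, (a - b) / max_diff))
    if t >= 0:                            # toward red
        return (1.0, 1.0 - t, 1.0 - t)
    return (1.0 + t, 1.0 + t, 1.0)        # toward blue
```

Equal attributes render white, so any visible tint immediately localizes where and in which direction the two renderings disagree.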
This paper describes a semi-automated building assessment method (SABAM) for estimating building edges with sub-pixel accuracy. The semi-automated approach is based on an earlier manual point method which determined building height using shadow length analysis. The manual method was then semi-automated using a sub-pixel edge detection algorithm to obtain more precise building edges and reduce human interpretation. Edge locations have been evaluated to within 1/100th of a pixel using gradient descent.
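Sub-pixel edge localization in this spirit can be sketched with a parabolic fit to the gradient peak; note this is a simpler estimator than the gradient-descent evaluation the paper reports, and the function is purely illustrative:

```python
def subpixel_edge(intensities):
    """Locate an edge to sub-pixel precision along a 1D intensity
    profile: find the pixel with the largest central-difference
    gradient, then fit a parabola through that gradient sample and
    its two neighbours; the parabola's vertex gives the offset."""
    grad = [intensities[i + 1] - intensities[i - 1]
            for i in range(1, len(intensities) - 1)]
    k = max(range(len(grad)), key=lambda i: abs(grad[i]))
    if k == 0 or k == len(grad) - 1:
        return float(k + 1)   # no neighbours to fit against
    gm, g0, gp = abs(grad[k - 1]), abs(grad[k]), abs(grad[k + 1])
    denom = gm - 2 * g0 + gp
    offset = 0.0 if denom == 0 else 0.5 * (gm - gp) / denom
    return (k + 1) + offset   # +1 converts gradient index to pixel index
```

A symmetric ramp yields the central pixel exactly; an asymmetric ramp shifts the estimate a fraction of a pixel toward the steeper side.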
In industrial applications, thermal infrared radiance mainly corresponds to the surface temperature of the target being studied. Since the contrast of an infrared image is usually much lower than that of a visible image, pseudocoloring techniques are usually used in real-time thermal image display. Pseudocoloring schemes for infrared video should be evaluated according to the ease with which temperature variations can be distinguished intuitively. In this paper, two uniform-brightness schemes for thermal infrared image pseudocoloring are presented. One of the new schemes is derived from the well-known Bezier/Bernstein blending function, and the other is based on the popular HSI color model. They are compared with a conventional pseudocoloring scheme and a modified conventional scheme. Some representative experimental results are illustrated to show the unique features of the new pseudocoloring schemes.
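A uniform-brightness ramp in the spirit of the HSI-based scheme can be sketched by driving only hue with temperature while holding the other components fixed. The sketch below uses the HSV model via Python's `colorsys` as a convenient stand-in for HSI; it is not the authors' scheme:

```python
import colorsys

def pseudocolor(temp, t_min, t_max):
    """Constant-brightness pseudocoloring: temperature drives hue
    (blue = cold through red = hot) while saturation and value are
    held fixed, so apparent brightness stays uniform across the ramp."""
    t = (temp - t_min) / (t_max - t_min)
    t = max(0.0, min(1.0, t))          # clamp out-of-range temperatures
    hue = (1.0 - t) * (2.0 / 3.0)      # 240 deg (blue) down to 0 deg (red)
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```

Applied per pixel, the coldest temperature maps to pure blue and the hottest to pure red, with the hue sweep in between carrying the temperature information.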
We describe techniques for adaptive nonverbal visual querying of large databases of images. The techniques facilitate (a) visual mapping, which visualizes the relationships among the images by plotting each image as a point in a multidimensional `feature space,' and (b) interactive selection of features to maximize the correspondence between the clusters in feature space and the user's understanding of the relationships among the stored images. We refer to this approach as adaptive visual querying; it will facilitate browsing and searching image databases from examples of images and from computer-aided sketches.
This paper describes an `interaction interface' for visual database exploration. Visualization has been traditionally thought of as an output technology: this research places visualization into a broader context and aims to develop an input model for the visual exploration of databases. We first describe the data infrastructure of an integrated database-visualization system. We then extend the definition of visualization to include the data interactions allowed over the visualized image. We finally present the portion of this system that describes how interactions over data visualizations are mapped to the targets of the visual interaction: the various data objects in the system, or the database itself. In this way, the user is brought closer to the data because interaction is over a visualization, which is perceived by the user, and the correct effect of the interaction is automatically mapped to the appropriate underlying data object. We build on a fundamental taxonomy of empirically-developed data interaction, and use these interaction specifications in our object-oriented design.
In this paper we present a user interface, CANDID Camera, for image retrieval using query-by-example technology. Included in the interface are several new layout algorithms based on multidimensional scaling techniques that visually display global and local relationships between images within a large image database. We use the CANDID project algorithms to create signatures of the images, and then measure the dissimilarity between the signatures. The layout algorithms are of two types. The first are those that project the all-pairs dissimilarities to two dimensions, presenting a many-to-many relationship for a global view of the entire database. The second are those that relate a query image to a small set of matched images for a one-to-many relationship that provides a local inspection of the image relationships. Both types are based on well-known multidimensional scaling techniques that have been modified and used together for efficiency and effectiveness. They include nonlinear projection and classical projection. The global maps are hybrid algorithms using classical projection together with nonlinear projection. We have developed several one-to-many layouts based on a radial layout, also using modified nonlinear and classical projection.
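The classical projection mentioned above is usually the Torgerson form of multidimensional scaling; a minimal sketch, with no claim to match the CANDID modifications, is:

```python
import numpy as np

def classical_mds(dist, k=2):
    """Classical (Torgerson) MDS: double-centre the squared
    dissimilarity matrix and embed via its top-k eigenvectors."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    b = -0.5 * j @ (dist ** 2) @ j           # inner-product matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:k]       # largest eigenvalues first
    lam = np.clip(vals[order], 0, None)      # guard tiny negatives
    return vecs[:, order] * np.sqrt(lam)
```

For dissimilarities that are exact Euclidean distances in k dimensions, the embedding reproduces the all-pairs distances; for image-signature dissimilarities it gives the best low-rank approximation, which is what the global many-to-many view needs.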
Image compression techniques based on wavelet and fractal coding have been recognized as significantly useful in image texture classification and discrimination. In the fractal coding approach, each image is represented by a set of self-transformations through which an approximation of the original image can be reconstructed. These transformations can be utilized to distinguish images. The fractal coding technique can be extended to effectively determine the similarity between images. We introduce a joint fractal coding technique, applicable to pairs of images, which can be used to determine the degree of their similarity. Our experimental results demonstrate that the fractal code approach is effective for content-based image retrieval. In the wavelet transform approach, the wavelet transform decorrelates the image data into the frequency domain. Feature vectors of images can be constructed from wavelet transformations, which can also be utilized to distinguish images by measuring distances between feature vectors. Our experiments indicate that this approach is also effective for content-based similarity comparison between images. More specifically, we observe that the wavelet transform approach performs more effectively on content-based similarity comparison for images that contain strong texture features, whereas the fractal coding approach performs more uniformly well across various types of images.
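The wavelet-feature half of the comparison can be sketched as follows: a one-level 2D Haar decomposition yields four subbands whose energies form a feature vector, and image dissimilarity is the distance between vectors. The subband choice and names are our assumptions, not the authors' feature design:

```python
import numpy as np

def haar_features(img):
    """One-level 2D Haar decomposition of an even-sized image; the
    feature vector is the energy in each subband (LL, LH, HL, HH)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2      # low-pass average
    lh = (a + b - c - d) / 2      # horizontal detail
    hl = (a - b + c - d) / 2      # vertical detail
    hh = (a - b - c + d) / 2      # diagonal detail
    return np.array([np.sum(s * s) for s in (ll, lh, hl, hh)])

def similarity_distance(img1, img2):
    """Euclidean distance between wavelet feature vectors."""
    return float(np.linalg.norm(haar_features(img1) - haar_features(img2)))
```

A flat image puts all of its energy in the LL band, while a checkerboard spreads energy into the detail bands, so textured and smooth images separate cleanly under this distance.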
Natural resources management typically requires prediction of environmental changes over long time periods. In the case of forest management, for example, decisions can affect timber production, water catchment properties, recreational values, aesthetic values, energy usage or employment opportunities. This paper presents an application of advanced visualization techniques in combination with a geographic information system and linear programming in this context. The emphasis is on provision of visual feedback on the outcome of decision options. The main interactive window includes a 3D view of the management area based initially on remote sensing imagery draped on a digital terrain model. Also on screen are a slider for time (from the present to 200 years hence), and sliders for decision variables such as required job support level, extent of habitat conservation or catchment performance. As the time or the decision variables are altered by the user, the result is presented through replacement of textures in the 3D view to represent the changes in land cover. Initially the visualization is based on prior modeling in a well-defined decision space. The system reads model output in ARC/INFO export format, while interactive visualization is based on the Silicon Graphics Performer Toolkit.
Debugging concurrent systems has been shown to be much more complex than debugging serial systems; this is further complicated by the number of processors that may be involved in any operation. The correctness of such systems is equally as important as that of serial systems. This places a considerable amount of extra demand on the debugging environment. Additional capability must be provided within the debugging environment to offset the complexity. We describe work related to the visualization of data associated with concurrent systems to aid users in comprehending the operation and correctness of their concurrent applications.
Distortion-oriented displays (DODs) are an interface approach for supporting navigation through large visual datasets (maps) without losing context. The traditional approaches of windowing and zooming lead the user to lose context within the overall map. A DOD presents the user with a movable virtual magnifying glass within which a detailed view of the point of focus is presented. Surrounding this, the rest of the map is presented in a visually compressed view to ensure context is retained. An important feature of a DOD is that the user should be able to move the point of focus around the screen and experience no discernible delay in the redisplay of the map. The computational overhead is therefore very important when considering the implementation of a DOD. This paper describes FRUSTUM, a novel form of DOD with low apparent distortion and minimal computational overhead. Experience with the FRUSTUM display has indicated that considerably higher magnification factors are possible than the generally accepted maximum for previously described DODs.
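FRUSTUM itself is not specified in the abstract, but the general shape of a DOD transfer function can be illustrated with the well-known Sarkar-Brown graphical fisheye, which magnifies near the focus and compresses toward the edge:

```python
def fisheye(x, d):
    """Sarkar-Brown graphical fisheye transfer function.

    x is the normalized distance from the focus (0 = focus, 1 = edge);
    d >= 0 is the distortion factor, with d = 0 the identity.
    Magnification at the focus is d + 1."""
    return (d + 1) * x / (d * x + 1)
```

The function is cheap to evaluate (one multiply-add and a divide per point), which is exactly the property a DOD needs so the focus can be dragged with no discernible redisplay delay.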
Regional differences in society create movement of people across regions. Conventionally, an interregional migration data set is stored as an origin-destination (O-D) matrix, which has one row for each flow origin and one column for each destination. For example, interregional migration data for the fifty states of the United States generates a 50 by 50 O-D matrix, and an O-D matrix for county-to-county migration would have more than 3100 rows and columns. Studying these complex migration flow systems has always been challenging to scientists. This paper reports a methodology that combines scientific visualization, exploratory data analysis, dynamic graphics and projection pursuit methods to explore these migration flow systems. First, a complex migration flow data set is simplified by using a projection pursuit method. These less complex data are then represented in four graphic views: a migration flow view showing direction and magnitude of migration, a choropleth view showing characteristics of O-D regions, a statistical view of flow variables, and a statistical view of O-D attribute variables. These four views are linked by using a dynamic brushing technique, which enables the researcher to explore the relationships between the four views. These relationships can then be used as the basis for understanding the migration flow system.
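Basic quantities for the flow and choropleth views can be read straight off the O-D matrix; a minimal sketch (not the projection pursuit simplification itself) is:

```python
def flow_summaries(od):
    """Per-region summaries from an origin-destination matrix:
    out-flow is the row sum, in-flow the column sum (diagonal,
    i.e. non-movers, excluded), and net flow their difference."""
    n = len(od)
    out_flow = [sum(od[i][j] for j in range(n) if j != i) for i in range(n)]
    in_flow = [sum(od[j][i] for j in range(n) if j != i) for i in range(n)]
    net = [i_ - o for i_, o in zip(in_flow, out_flow)]
    return out_flow, in_flow, net
```

For a three-region example, net flow immediately identifies which regions gain and which lose population, the kind of attribute a choropleth view would shade.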
This paper describes a visualization tool which is a component of the NYNEX Network Exploratorium, a decision support platform for telephone interoffice network planning. The visualization tool provides a number of visualization styles suited to knowledge discovery in the network planning domain, and is designed for expansion to support additional styles. Visualizations are specified using a declarative language, which binds visual attributes directly to arbitrary attributes of objects in an object-oriented database. An overview of the language and some application examples are presented.
This paper reviews the results of multidimensional image analysis and visualization studies using the n-dimensional Probability Density Function (nPDF) algorithm. The nPDF technique is an approach to the visualization and analysis of multispectral data and overcomes many of the problems inherent to traditional classifiers that rely on purely statistical approaches to describe data and class (or training field) distribution. A graphical method, in conjunction with statistical techniques, has the advantage of providing a multidimensional data distribution and may be used for supervised and unsupervised classifications. The approach is particularly useful for comparing training data with the spectral classes present in the entire data set. Compared to conventional statistical classifiers, the nPDF procedure is extremely fast and user-interactive. The approach relies on data visualization techniques and displays data and class distributions graphically. In this paper, a review of the theory and applications of the technique is given. The data processing procedure for supervised and unsupervised classifications using the interactive nPDF method and a comparison of the nPDF technique with traditional algorithms are also discussed.
We describe a software platform in which large DNA sequence datasets may be visualized by techniques which readily reveal patterns and insights. Initially we have focused on providing accurate statistical visualizations rather than qualitative presentations. The first application of this platform visualizes properties of DNA sequence strings of any size as a function of string position (for example, in a large chromosome). We provide an example in which we visualize the ratio of found to expected frequency of occurrence for specific sequence strings (AAAA and TTTT) and show these reveal interesting patterns in that DNA string (yeast chromosome III). For flexibility, any new function, calculated from the sequence string, may be added to the software platform.
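The found-to-expected frequency ratio described above can be sketched directly; the single-base independence model used for the expectation is our assumption about how "expected" is computed:

```python
def obs_exp_ratio(seq, word):
    """Ratio of found to expected occurrences of `word` in `seq`.

    The expectation multiplies the sequence's single-base frequencies
    (independence assumption) by the number of window positions.
    Overlapping occurrences are counted."""
    n = len(seq)
    found = sum(1 for i in range(n - len(word) + 1)
                if seq[i:i + len(word)] == word)
    p = 1.0
    for base in word:
        p *= seq.count(base) / n
    expected = p * (n - len(word) + 1)
    return found / expected if expected else float("inf")
```

Plotted per position over sliding windows of a chromosome, ratios well above 1 for strings such as AAAA or TTTT flag the kind of non-random patterning the platform is designed to reveal.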
The X Public Access (XPA) mechanism allows an Xt program to define named public access points through which data and commands can be exchanged with other programs. We will discuss our design goals for XPA, the technical challenges we faced (including extensions to the Xt selection implementation), and the user interface and application programming interface that we developed to meet these challenges. We also will describe our application of XPA to a new version of the popular SAOimage astronomical image display program. XPA makes possible external control of the program's main functions, including image display, image zoom and pan, colormap manipulation, cursor/region definition, and frame selection. It also supports `public access' to internal algorithms such as image file access and scaling. Finally, we will describe how XPA is used to support user-configurable analysis of image data and bi-directional communication with other processes.
The goal of the Virtual Prototyping Environment (VPE) is to decrease product development time and costs and to increase quality and flexibility by providing continuous computer support for the development cycle. Virtual prototypes are directly derived from integrated CAD systems and enriched with simulation data. In addition, the VPE supports cooperative teams by providing different Computer Supported Cooperative Work (CSCW) techniques of shared viewing environments in combination with further communication tools for long-distance collaboration. One aspect of CSCW in VPEs is multi-user and multi-application shared 3D environments. In the shared environment model, 3D objects from different applications can be joined into one scene that can be viewed by different users with independent or shared camera positions, enabling the distribution of visualization tasks between smaller, flexible and more specialized applications. The underlying product model of the VPE is based on the STEP standard applied to a distributed object-oriented data management approach. Taking the requirements from CE and shared 3D environments into account, we enriched the application interface to the data management with high-level concepts such as object versioning, migration and consistency management, and extended the product data model to include presentation and annotation aspects.
In modern object-oriented computer systems the internal state of the entire system consists of the internal states of many objects, possibly distributed over a heterogeneous network of computers. Man-machine interaction in such an application is based on the visualization of states on the one hand and the modification of states in combination with event generation on the other. This paper describes the concept and realization of a reusable service for general man-machine communication.
A generic interface environment for users seeking to interact with and manipulate complex data sets is described. The design is based on the paradigm of a layered 3D visual environment which depicts a current context within the data set, using the notions of `above', `below', `beside', and `beyond'. This environment facilitates user exploration of the data, by selecting operations available in the current context, or by invoking `specialization' or `generalization' transitions to another context in the adjacent layers. The environment has been implemented with user and administrator modes of operation. Two examples of use of the environment are given for typical applications.
Current research efforts involve using a supercomputing/visualization facility for climate and weather simulations and data visualization applications. The numerical models of the National Center for Atmospheric Research, the Community Climate Model version 2 (CCM2) and the Mesoscale Model version 5 (MM5), are currently running on the VPX240 and VPP500 supercomputers and are generating climate and weather data. Workstations and microcomputers connected to the supercomputers have been used to visualize the data. AVS macro-module networks and AVS/Express projects were developed on an SGI workstation to study climatological and meteorological parameters. The specific visualization topics addressed are: comparison of the AVS and AVS/Express visualization systems; comparison of isosurface and iso-volume techniques; and a multivariate data visualization system using multiple windows and composites of data fields. A weather simulation case has been studied using the multivariate data visualization system and these visualization techniques. The results show that common weather features such as fronts, mesoscale high-pressure systems, and clouds can be identified easily. Data dissemination and visualization using Internet browsers have been conducted successfully across the Pacific Ocean.
The flow dynamics of the ventricular system in the brain are poorly understood. Invasive monitoring using radioisotope tracers or contrast media allows sampling of only one spatial location over time, while non-invasive magnetic resonance imaging (MRI) makes it difficult to obtain a coherent 3D dataset with adequate spatial and temporal resolution. In order to increase our understanding of cerebrospinal fluid flow and brain motion in vivo, a 3D geometric model was constructed by segmentation of coronal cross sections of the ventricular space. The initial model was constructed with some modifications to the geometry in order to simplify the flow field: the third ventricle was treated as an unlimited reservoir of constant pressure. The simulation deals with pulsatile flow across a free-surface boundary. The ependymal ventricular lining can be considered an elastic membrane that deforms the ventricular space at a rate linked to the cardiac cycle. Consequently, one is dealing with a transient dynamic analysis whose displacements and velocities can be approximated by the morphological changes seen in time-gated MRI sequences taken in the axial, coronal, and sagittal planes.
Ligand-receptor protein binding is an important process in drug design. This paper discusses the development of a visual tool for studying the binding of ligand-receptor pairs, to help identify the active sites of the molecules. The tool can be used to explore many possible variations in binding pairs without performing expensive laboratory experiments. The traditional view of the binding process has been limited to a static lock-and-key model; it is now recognized that the ligand and receptor change dynamically during the interaction. Traditional experimental methods determine only the shape of static chemical structures. Our tool improves on previous methods by dynamically simulating the entire binding process of isolated ligand-receptor pairs. The open design of our dynamic interaction model allows its extension with further constraints and heuristic rules, which is needed when the existing forces do not provide a sufficiently complete description of the system. For example, more detailed simulation constraints can increase the probability of convergence of a ligand-receptor pair; new constraints can limit the degrees of freedom of rotation about bonds, to take molecular affinity into account; and the intermolecular rules may be changed to include the effects of hydrogen bonding and other forces.
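As a rough illustration of force-driven binding simulation, the toy sketch below relaxes a one-atom rigid "ligand" against a fixed "receptor" under a Lennard-Jones-style pair potential by numerical gradient descent. The potential, step size, and geometry are illustrative assumptions, far simpler than the paper's interaction model:

```python
import numpy as np

def pair_energy(r, sigma=1.0, eps=1.0):
    """Lennard-Jones-style interaction: repulsive at short range, attractive at long range."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def total_energy(ligand, receptor):
    """Sum pairwise energies between all ligand and receptor atoms."""
    d = np.linalg.norm(ligand[:, None, :] - receptor[None, :, :], axis=-1)
    return pair_energy(d).sum()

def relax(ligand, receptor, step=0.01, iters=400):
    """Translate the rigid ligand downhill in energy by numerical gradient descent."""
    pos = ligand.copy()
    for _ in range(iters):
        grad = np.zeros(3)
        for k in range(3):
            e = np.zeros(3)
            e[k] = 1e-4
            # Central-difference gradient of the energy w.r.t. a rigid translation.
            grad[k] = (total_energy(pos + e, receptor) - total_energy(pos - e, receptor)) / 2e-4
        pos = pos - step * grad
    return pos

receptor = np.array([[0.0, 0.0, 0.0]])
ligand = np.array([[2.0, 0.0, 0.0]])   # starts too far away; the minimum lies near r = 2^(1/6)
docked = relax(ligand, receptor)
```

A fuller model, as the abstract suggests, would add rotational degrees of freedom, bond-rotation limits, and hydrogen-bonding terms as further constraints and rules.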
In this paper, an integrated program development environment for computer vision tasks is presented. The first component of the system is concerned with the visualization of 2D image data, done in an object-oriented manner. Programming of the visualization process is achieved by arranging the representations of iconic data in an interactively customizable hierarchy that establishes an intuitive flow of messages between data representations seen as objects. The visualization objects, called displays, are designed for different levels of abstraction, from direct iconic representation down to numerical features, depending on the information needed. Two types of messages are passed between these displays (update and result messages), which yields a clear and intuitive semantics. The second component of the system is an interactive tool for rapid program development. It helps the user select appropriate operators in several ways: for example, the system provides context-sensitive selection of possible alternative operators, as well as suitable successors and required predecessors. For the task of choosing appropriate parameters, several mechanisms exist. First, the system provides default values, as well as lists of useful values for all parameters of each operator; to achieve this, a knowledge base containing facts about the operators and their parameters is used. Second, through the tight coupling of the two system components, parameters can be determined quickly by data exploration within the visualization component.
The generation of animation sequences of deformed volumetric objects on networked workstations is discussed. We use several deformation types: space mapping controlled by points linked to features of an object, deformation with an algebraic sum, and metamorphosis. These deformations are applied directly to interpolated volume data, followed by polygonization of an isosurface for visualization.
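The abstract gives no formulas, but the blend-style operations it names can be sketched on sampled scalar fields. The field functions, grid resolution, and frame count below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def sphere_field(grid, center, radius):
    """Implicit field of a sphere: positive inside, negative outside, zero on the surface."""
    x, y, z = grid
    return radius ** 2 - ((x - center[0]) ** 2 + (y - center[1]) ** 2 + (z - center[2]) ** 2)

def metamorphosis(f0, f1, t):
    """Linear blend between two volume fields; t=0 gives f0, t=1 gives f1."""
    return (1.0 - t) * f0 + t * f1

def algebraic_sum(f, bump):
    """Deform a field by adding a second field (a simple additive deformation)."""
    return f + bump

# Sample both shapes on a common grid.
axis = np.linspace(-2.0, 2.0, 32)
grid = np.meshgrid(axis, axis, axis, indexing="ij")
f0 = sphere_field(grid, (0.0, 0.0, 0.0), 1.0)
f1 = sphere_field(grid, (0.5, 0.0, 0.0), 0.8)

# Animation frames: interpolate the volume field frame by frame.
frames = [metamorphosis(f0, f1, t) for t in np.linspace(0.0, 1.0, 5)]
```

Polygonizing the zero isosurface of each frame (e.g. with marching cubes) would then yield the geometry for rendering.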
A painting method for the interacting motion of multiple bodies is described. It is based on a method for painting the moving picture of a single body, which may be either rigid or flexible. The moving picture of each body is painted concurrently. Affine transformations and motion vectors are used to paint the moving picture, and the system interpolates the parameters given by the painter over time. To paint the moving picture of a flexible body, the painter defines the region he wants to move on the still picture; that is, the geometrical space is divided. To paint a moving picture with interactive motion, the time interval of the moving picture is divided into periods of motion with interaction and periods of motion without interaction, and the moving picture is painted over this set of time periods. Thus both the geometrical space and the time period are divided, and picture painting proceeds sequentially in both domains.
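As an illustration of interpolating painter-supplied parameters over time, here is a minimal keyframe sketch; the parameter names, keyframe values, and two-keyframe track are hypothetical, not taken from the paper:

```python
import numpy as np

def lerp_params(key_times, key_values, t):
    """Piecewise-linear interpolation of a painter-supplied parameter track."""
    return np.interp(t, key_times, key_values)

def affine_2d(angle, scale, tx, ty):
    """Build a 2D homogeneous affine matrix (rotation + uniform scale + translation)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[scale * c, -scale * s, tx],
                     [scale * s,  scale * c, ty],
                     [0.0,        0.0,       1.0]])

# Keyframes given by the painter: identity at t=0, rotated and shifted at t=1.
times = [0.0, 1.0]
angles = [0.0, np.pi / 2]
shifts_x = [0.0, 3.0]

def frame_matrix(t):
    a = lerp_params(times, angles, t)
    tx = lerp_params(times, shifts_x, t)
    return affine_2d(a, 1.0, tx, 0.0)

M = frame_matrix(0.5)   # halfway frame: 45-degree rotation, shifted by 1.5
```

In the paper's setting, one such interpolated track would drive each painted region of each body over its time period.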
Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users with a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. This is accomplished by assimilating important information from each video stream into a comprehensive, dynamic 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video system currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication, revolutionizing television and video media and becoming an integral part of future telepresence and virtual reality systems.
In this paper, we describe a method of 3D modeling based on photographs for a real-time graphics system for educational use. The method uses a few basic models, such as squares and spheres, and a 3D model is constructed by modifying these basic models under the guidance of parameters. For example, we built an educational real-time graphics system for deep space containing 3D models of galaxies. A typical galaxy, the spiral galaxy, consists of two parts: a spherical central part named the bulge, and a whirlpool-shaped, convex-lens-like surrounding part named the galactic disc. Photographs of galaxies are taken from a limited range of angles, because galaxies are very far away and can be viewed only from the Earth. A galaxy photograph therefore shows either a whirlpool form, a convex-lens form, or a slanted form between the two. Our method places a sphere model at the bulge position and a convex-lens model, formed by deforming a sphere, at the galactic-disc position. Parameters are used to change a galaxy's position, size, deformation along the XYZ axes, and rotation. Thus we obtain a 3D galaxy model corresponding to the photograph, and learners can look at a 3D galaxy from any viewpoint and view direction. In this way we construct realistic 3D models while keeping the amount of rendering computation low, so real-time images can be produced freely from a moving viewpoint and view direction.
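The sphere-to-lens construction can be sketched as a per-axis scaling of sampled sphere points; the point counts, radii, and flattening factor below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def sphere_points(n, radius):
    """Roughly uniform points on a sphere via the Fibonacci spiral."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden-angle increment
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return radius * np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

def scale_xyz(points, sx, sy, sz):
    """Per-axis scaling: the deformation that turns a sphere into a convex lens."""
    return points * np.array([sx, sy, sz])

bulge = sphere_points(500, 0.3)                              # spherical central part
disc = scale_xyz(sphere_points(2000, 1.0), 1.0, 1.0, 0.15)   # sphere flattened into a lens
galaxy = np.vstack([bulge, disc])                            # composite galaxy model
```

Further rotation and translation parameters would then orient the model to match the slant seen in a given photograph.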
We wish to walk into a photograph just as Alice walked into the looking glass. From a mathematical perspective, this problem is exceedingly ill-posed (e.g. is that a large, distant object or a small, nearby object?). A human expert can supply a large amount of a priori information that can function as mathematical constraints. The constrained problem can then be attacked with photogrammetry to obtain a great deal of quantitative information which is otherwise only qualitatively apparent. The user determines whether the object to be analyzed contains two or three vanishing points, then selects an appropriate number of points from the photograph to enable the code to compute the locations of the vanishing points. Using this information and standard photogrammetric geometric algorithms, the location of the camera relative to the structure is determined. The user must also enter information providing an absolute sense of scale. As the vectors from the camera to the various points chosen from the photograph are determined, the vector components (coordinates) are handed to a virtual reality software package. Once the objects are entered, the appropriate surfaces of the 3D object are `wallpapered' with the corresponding surfaces from the photograph. The user is then able to move through the virtual scene. A video will demonstrate our work.
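The vanishing-point step can be sketched with homogeneous coordinates: each pair of user-selected points defines an image line as a cross product, and two lines that are parallel in the scene intersect at the vanishing point. The pixel coordinates below are hypothetical, chosen only to make the example concrete:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous image line through two points (cross product of homogeneous points)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(edge_a, edge_b):
    """Intersect the image lines of two edges that are parallel in the scene."""
    v = np.cross(line_through(*edge_a), line_through(*edge_b))
    return v[:2] / v[2]        # dehomogenize (assumes the lines are not parallel in the image)

# Two edges of a box that converge in the image (hypothetical pixel coordinates).
edge_a = ((0.0, 0.0), (4.0, 1.0))
edge_b = ((0.0, 2.0), (4.0, 1.5))
vp = vanishing_point(edge_a, edge_b)
```

With two or three such vanishing points, the standard photogrammetric algorithms the abstract mentions can recover the camera orientation relative to the structure.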
Dynamic characteristics of occlusion during lower-jaw motion are useful in the diagnosis of jaw articulation problems and in the computer-aided design/manufacture of teeth restorations. The Functionally Generated Path (FGP), produced as a surface that envelops the actual occlusal surface of the moving opponent jaw, can be used as a compact representation of dynamic occlusal relations. In traditional dentistry, the FGP is recorded as a bite impression in a patient's mouth. We propose an efficient computerized technique for FGP reconstruction and validate it through implementation and testing. The distance maps between the occlusal surfaces of the jaws, calculated for multiple projection directions and accumulated over the mandibular motion, provide the information for FGP computation. Rasterizing graphics hardware is used for fast calculation of the distance maps. Real-world data are used: the scanned shape of the teeth and the measured motion of the lower jaw. We show applications of the FGP to the analysis of occlusion relations and to occlusal surface design for restorations.
Full-field surface data of cylindrically shaped objects, such as a human head, can be acquired quickly by rotating a laser scanner and imaging system about the subject. B-spline surfaces can then be fitted to the measurements for data reduction and for compatibility with NURBS-based CAD systems. The available surface-fitting techniques are subject to user input: parameters such as the number of control points and the tension (in the sense of a thin-plate spline) must be chosen to achieve an optimal fit. This paper discusses the optimal choice of surface-fitting parameters for human head scan data. The techniques for determining these optimal parameters should benefit other researchers working with similarly shaped data sets.
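To make the control-point trade-off concrete, a least-squares fit of a 1D cubic B-spline (Cox-de Boor basis, clamped uniform knots) to noisy synthetic profile data is sketched below; the surface case extends this with a tensor-product basis. The data and parameter values are illustrative, not the paper's:

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of B-spline basis function i of degree k at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k] > knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, t, knots)
    right = 0.0
    if knots[i + k + 1] > knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

def fit_curve(t_data, y_data, n_ctrl, degree=3):
    """Least-squares B-spline fit; n_ctrl controls the closeness/smoothness trade-off."""
    # Clamped uniform knot vector on [0, 1]; the tiny shift keeps t=1 inside the last span.
    inner = np.linspace(0.0, 1.0, n_ctrl - degree + 1)[1:-1]
    knots = np.concatenate([[0.0] * (degree + 1), inner, [1.0 + 1e-9] * (degree + 1)])
    A = np.array([[bspline_basis(i, degree, t, knots) for i in range(n_ctrl)] for t in t_data])
    ctrl, *_ = np.linalg.lstsq(A, y_data, rcond=None)
    return knots, ctrl, A

# Noisy synthetic profile of a scanned slice: more control points give a closer fit.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(50)
knots, ctrl, A = fit_curve(t, y, n_ctrl=8)
residual = np.linalg.norm(A @ ctrl - y)
```

Sweeping `n_ctrl` (and, in a tension formulation, a smoothing weight) and plotting the residual against the parameter count is one simple way to search for the optimal settings the paper discusses.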
We propose a simple and efficient algorithm for reconstructing 3D objects from multiple silhouettes of an object, taken from arbitrary views. We classify each silhouette as lying in a face view, an edge view, or, otherwise, a general view. Our approach is based on maintaining a single octree, which is trimmed to fit the object as the silhouettes are processed. We generate a locational table for each of the three views. These tables are fixed and are used by the algorithm to simplify processing. The algorithm deals with the silhouettes one at a time. The image is recursively decomposed into regions until a region is either entirely outside or entirely inside the object. Regions outside the object are processed by deleting all corresponding nodes from the octree, using the tables.
To evaluate the performance of our algorithm, we examine three criteria: time complexity, space complexity, and generality. We examined each criterion both analytically and experimentally. Our algorithm has superior time and space complexity; it is also general, not limited to a specific set of views, and very accurate.
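A much-simplified sketch of the trimming idea follows: it uses an orthographic projection along one axis, dense pixel tests instead of the paper's precomputed locational tables, and a nested dict as the octree. It illustrates the outside/inside/mixed classification and node deletion, not the authors' actual algorithm:

```python
import numpy as np

def silhouette_contains(silhouette, x, y):
    """True if integer pixel (x, y) lies inside the object's silhouette image."""
    h, w = silhouette.shape
    return 0 <= x < w and 0 <= y < h and silhouette[y, x]

def carve(node, x0, y0, z0, size, silhouette):
    """Trim an octree against one orthographic silhouette viewed along +z.

    A cube whose image footprint is entirely background is deleted (None);
    an entirely-foreground cube is kept whole; a mixed cube is subdivided."""
    footprint = [silhouette_contains(silhouette, x, y)
                 for x in range(x0, x0 + size) for y in range(y0, y0 + size)]
    if not any(footprint):
        return None                      # fully outside the silhouette: delete the node
    if all(footprint) or size == 1:
        return node                      # fully inside (or a leaf): keep as-is
    half = size // 2
    children = {}
    offsets = [(a, b, c) for c in (0, 1) for b in (0, 1) for a in (0, 1)]
    for idx, (dx, dy, dz) in enumerate(offsets):
        sub = node.get(idx, "full") if isinstance(node, dict) else "full"
        child = carve(sub, x0 + dx * half, y0 + dy * half, z0 + dz * half, half, silhouette)
        if child is not None:
            children[idx] = child
    return children if children else None

# A 4x4 silhouette: the object occupies the left half of the image.
sil = np.zeros((4, 4), dtype=bool)
sil[:, :2] = True
octree = carve("full", 0, 0, 0, 4, sil)   # start from a full 4x4x4 root cube
```

Processing further silhouettes from other views would repeat the carve, deleting more nodes each time until the octree converges to the visual hull.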
This paper addresses the issue of octree representation. Many different octree representation methods currently exist. We propose a new representation method, the GrayCode, which can be used to represent both octrees and quadtrees. The GrayCode method stores the octree as a group of several lists, each list representing one level of the octree. In each list we store only non-terminal (gray) nodes. Each node record contains the node's locational code and 8 fields containing the average color of each of its eight sons. The locational code specifies both the exact location of the octant in space and its size.
To evaluate the performance of our representation method, we examine it against four criteria: the ability to store a large amount of data, the ability to skip detail, compactness, and ease of processing. We experimented with a set of 13 sample objects in random orientations. Through analysis and experiments with the random objects, we show that the GrayCode performs very well on all four criteria, and we conclude that it has the best overall performance against the octree requirements.
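One common way to realize such a locational code, shown here as an illustrative sketch (the paper's exact encoding may differ), packs the root-to-node octant path into one integer behind a leading 1 bit, so the code alone recovers both the octant's position and its size (depth):

```python
def locational_code(path):
    """Encode a root-to-node path (octant indices 0-7) as a single integer.

    The leading 1 bit acts as a sentinel that preserves the depth, so both
    the location and the size of the octant are recoverable from the code."""
    code = 1
    for octant in path:
        code = (code << 3) | octant
    return code

def decode(code):
    """Recover the octant path (and hence location and size) from a locational code."""
    path = []
    while code > 1:
        path.append(code & 0b111)   # peel off the last 3-bit octant index
        code >>= 3
    return list(reversed(path))

c = locational_code([5, 0, 7])   # a depth-3 node: octant 5, then 0, then 7
```

Sorting node records by such codes within each per-level list also keeps spatially nearby octants close together, which helps the ease-of-processing criterion.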
Texture plays an important role in image analysis and understanding, with many applications in medical imaging and computer vision. However, analysis of texture by image processing is a rather difficult issue, with most techniques being oriented towards statistical analysis which may not have readily comprehensible perceptual correlates. We propose new methods for auditory display (AD) and sonification of (quasi-)periodic texture (where a basic texture element or `texton' is repeated over the image field) and random texture (which could be modeled as filtered or `spot' noise). Although the AD designed is not intended to be speech-like or musical, we draw analogies between the two types of texture mentioned above and voiced/unvoiced speech, and design a sonification algorithm which incorporates physical and perceptual concepts of texture and speech. More specifically, we present a method for AD of texture where the projections of the image at various angles (Radon transforms or integrals) are mapped to audible signals and played in sequence. In the case of random texture, the spectral envelopes of the projections are related to the filter or spot characteristics, and convey the essential information for texture discrimination. In the case of periodic texture, the AD provides timbre and pitch related to the texton and periodicity. In another procedure for sonification of periodic texture, we propose to first deconvolve the image using cepstral analysis to extract information about the texton and the horizontal and vertical periodicities. The projections of individual textons at various angles are used to create a voiced-speech-like signal with each projection mapped to a basic wavelet, the horizontal period to pitch, and the vertical period to rhythm on a longer time scale. The sound pattern then consists of a serial, melody-like sonification of the patterns for each projection.
We believe that our approaches provide the much-desired `natural' connection between the image data and the sounds generated. We have evaluated the sonification techniques with a number of synthetic textures. The sound patterns created have demonstrated the potential of the methods in distinguishing between different types of texture. We are investigating the application of these techniques to auditory analysis of texture in medical images such as magnetic resonance images.
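The projection-to-sound mapping can be sketched as follows; the nearest-bin projection, sample rate, carrier frequency, and amplitude-envelope mapping are illustrative choices, not the authors' algorithm:

```python
import numpy as np

def projection(image, angle):
    """Approximate Radon projection: bin pixel intensities by their signed distance
    to a line at the given angle through the image center (nearest-bin)."""
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    s = (x - w / 2) * np.cos(angle) + (y - h / 2) * np.sin(angle)
    n_bins = int(np.ceil(np.hypot(h, w)))
    idx = np.clip(np.round(s + n_bins / 2).astype(int), 0, n_bins - 1)
    return np.bincount(idx.ravel(), weights=image.ravel(), minlength=n_bins)

def sonify(proj, sr=8000, dur=0.25, f0=220.0):
    """Map a projection to sound: use it as the amplitude envelope of a tone."""
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    env = np.interp(np.linspace(0, len(proj) - 1, t.size), np.arange(len(proj)), proj)
    env = env / (env.max() + 1e-12)
    return env * np.sin(2 * np.pi * f0 * t)

# A vertical-stripe texture: its 0-degree projection is strongly periodic,
# so the corresponding sound segment carries a strong rhythmic envelope.
img = np.zeros((32, 32))
img[:, ::4] = 1.0
signal = np.concatenate([sonify(projection(img, a)) for a in (0.0, np.pi / 4, np.pi / 2)])
```

Playing the angle segments in sequence, as the abstract describes, lets a listener compare the texture's structure across orientations by ear.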