Looking back on the history of visualization, understood as the graphical representation of the results of scientific computing, we see that visualization has been defined primarily by the technology of computing and displays. The 'ancient' history of visualization is that which precedes the publication of the NSF panel report Visualization in Scientific Computing in 1987. After reviewing this history, its technologies, and the limitations imposed by those technologies and their costs, we examine the objectives set out in the panel report and the state of visualization today, especially its limitations and problems. In the past, the cost of computer memory was the primary inhibitor of visualization; today the bottleneck comes primarily from limitations in network bandwidth. Looking at trends in today's technology, as well as trends and opportunities for visualization in scientific applications, we suggest potential developments in visualization by the end of the century.
Recent efforts in visualization have concentrated on high-volume data sets from numerical simulations and medical imaging. There is another large class of data, characterized by spatial sparsity with noisy and possibly missing data points, that also needs to be visualized. Two fields where such data sets are common are oceanographic and atmospheric science. In such cases, it is not uncommon to have on the order of one percent of sampled data available within a space volume. Techniques that attempt to fill in the holes range in complexity from simple linear interpolation to more sophisticated multiquadric and optimal interpolation methods. These techniques generally produce results that do not fully agree with each other. To avoid misleading users, it is important to highlight these differences and to make users aware of the idiosyncrasies of the different methods. This paper compares some of these interpolation techniques on sparse data sets and also discusses how other parameters, such as confidence levels and drop-off rates, may be incorporated into the visual display.
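To illustrate how two such scattered-data interpolators can disagree, the sketch below (a hypothetical example, not the authors' code) fills a regular grid from a sparse, noisy sample using both piecewise-linear interpolation and a multiquadric radial basis function, then reports the largest disagreement between the two reconstructions.

```python
import numpy as np
from scipy.interpolate import griddata, Rbf

rng = np.random.default_rng(0)

# Sparse, noisy samples of an unknown field (roughly 1% coverage of the grid).
xs, ys = rng.uniform(0, 1, (2, 100))
vals = np.sin(4 * xs) * np.cos(3 * ys) + rng.normal(0, 0.05, xs.shape)

# Target grid on which the field is to be visualized.
gx, gy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))

# Method 1: piecewise-linear interpolation on a Delaunay triangulation.
linear = griddata((xs, ys), vals, (gx, gy), method="linear")

# Method 2: multiquadric radial basis functions.
multiquadric = Rbf(xs, ys, vals, function="multiquadric")(gx, gy)

# The reconstructions can differ noticeably away from the samples;
# displaying this difference is one way to keep users aware of it.
diff = np.abs(multiquadric - linear)
print("max disagreement:", np.nanmax(diff))
```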
The Oceanographic Visualization Interactive Research Tool (OVIRT) was developed to explore the utility of scalar field volume rendering in visualizing environmental ocean data and to extend some of the classical 2D oceanographic displays into a 3D visualization environment. It has five major visualization tools: cutting planes, minicubes, isosurfaces, sonic-surfaces, and direct volume rendering. The cutting planes tool provides three orthogonal cutting planes which can be interactively moved through the volume. The minicubes routine renders small cubes whose faces are shaded according to function value. The isosurface tool is conceptually similar to the well-known marching cubes technique. The sonic surfaces are an extension of the 2D surfaces which have been classically used to display acoustic propagation paths and inflection lines within the ocean. The surfaces delineate the extent and axis of the shallow and deep sound channels. The direct volume rendering (DVR) techniques give a global view of the data. Other features include the ability to overlay the shoreline and inlay the bathymetry. There are multiple colormaps, an automatic histogramming feature, a macro and scripting capability, a picking function, and the ability to display animations of DVR imagery. There is a network feature to allow computationally expensive functions to be executed on remote machines.
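As a minimal illustration of the cutting-plane idea (a generic sketch, not OVIRT's implementation), the code below extracts an axis-aligned slice from a regular 3D scalar field and normalizes it so it can be passed to any colormap; moving the plane interactively amounts to changing the slice index.

```python
import numpy as np

def axial_slice(volume, axis, index):
    """Return one axis-aligned cutting plane from a 3D scalar field."""
    return np.take(volume, index, axis=axis)

# Hypothetical ocean temperature volume on a regular (depth, lat, lon) grid.
temperature = np.random.default_rng(1).normal(10.0, 2.0, (50, 128, 128))

# One cutting plane at mid depth, normalized to [0, 1] for colormapping.
plane = axial_slice(temperature, axis=0, index=25)
normalized = (plane - plane.min()) / (plane.max() - plane.min())
print(normalized.shape)
```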
The ocean contains complex physical phenomena that evolve over both space and time and exhibit many dynamic mesoscale features, such as eddies and fronts. Studying the evolution, deformation, and interaction of these dynamic features, and accurately modeling them, is the essence of research in ocean circulation. Scientific visualization provides an effective means for scientists to study the evolution and interaction of oceanographic features in large time-varying, multiparameter data sets. One of the goals of oceanographic visualization is to help build and validate mathematical models, which generally requires accurate tracking, and therefore a numerical description, of ocean features over space and time. Many visualization techniques have been found effective in providing insight. However, a desirable capability for visualization of mesoscale features is the automatic recognition and tracking of underlying features in oceanographic data sets. Important features that may not be anticipated in large data sets should be detected automatically during the visualization process, since the existence, and therefore the locations, of features are often unknown even to a knowledgeable user at the outset. Without an automatic mechanism, intensive and time-consuming searches must be performed; otherwise the features may not be revealed. In order to best characterize the features, locally optimized data classification must be achieved at each time step, at each depth level, and for each parameter; this is a difficult, if not impossible, task with traditional data classification methods. In this paper, we present our work in addressing the feature extraction problem in four-dimensional oceanographic visualization. We have developed feature tracking algorithms that exploit the features' temporal and spatial correlations, and we have applied them to tracking eddies over space and time.
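One common way to exploit spatial correlation between time steps (a generic overlap heuristic sketched here, not the authors' algorithm) is to segment candidate features at each step and then associate a feature with the region it overlaps most in the next step. The snapshots and threshold below are placeholders for real oceanographic fields.

```python
import numpy as np
from scipy import ndimage

def label_features(field, threshold):
    """Segment candidate features as connected regions above a threshold."""
    labels, count = ndimage.label(field > threshold)
    return labels, count

def match_by_overlap(labels_t0, labels_t1):
    """Associate features between two time steps by maximum spatial overlap."""
    matches = {}
    for feature in range(1, labels_t0.max() + 1):
        mask = labels_t0 == feature
        overlapping = labels_t1[mask]
        overlapping = overlapping[overlapping > 0]
        if overlapping.size:
            # The feature at t1 sharing the most cells is taken as the successor.
            matches[feature] = int(np.bincount(overlapping).argmax())
    return matches

# Two hypothetical vorticity snapshots; a real data set would supply these.
rng = np.random.default_rng(2)
snap0 = ndimage.gaussian_filter(rng.normal(size=(64, 64)), 4)
snap1 = np.roll(snap0, shift=3, axis=1)  # features drift between steps

l0, _ = label_features(snap0, snap0.mean() + snap0.std())
l1, _ = label_features(snap1, snap1.mean() + snap1.std())
print(match_by_overlap(l0, l1))
```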
This paper explores the visualization techniques, design objectives, implementation trade-offs, and results of creating visualization tools for evaluating data from a space-based sensor estimated to produce 8 gigabytes of data per day for approximately 20 months. The effectiveness of the visualization tools is evaluated in terms of data accessibility and user control over visual information. Visual presentation is also evaluated as a contributing factor to the perceived effectiveness and success of these tools.
Data analysis software can be built using two differing philosophies: 1) develop custom applications to meet each need as it arises, or 2) develop a general toolbox with which to interactively build a wide variety of applications. In support of SPIRIT III performance assessment [1], we have developed a package using the latter philosophy. A database and an analysis toolbox are paired to provide a high degree of flexibility in data retrieval, analysis, and visualization. The system is integrated into SGI Explorer, which allows scientists to interactively construct analysis networks. These networks perform database queries, then apply user-selected analysis and visualization tools to the query results. The Reporter package allows interactive design of reports with multiple plots, tables, and text annotations per page. The core analysis module is the Mapper, which allows the interactive transformation of 2- or 3-dimensional data into N dimensions. Mapper allows a rich variety of correlations and relationships in the data to be examined. Finally, custom analysis networks may be saved and distributed within the user community. This package has proven effective in supporting the rapidly changing needs of SPIRIT III analysis. Due to its general nature, it also holds promise for broader use.
This paper addresses an example of Synthetic Aperture Radar (SAR) data management and processing using standard open systems. The SAR Data Quality Analysis application includes SAR data decoding, visualization of the whole image with its related characteristics, visualization of a zoomed portion of the image, impulse-response-function analysis of any pixel of the image, and quality-analysis report generation and printing. The SAR Data Quality Analysis application is built on an image processing library, TELIMAGO, which resulted from a research and development program and provides application developers with a set of standard tools (including user-friendly interface objects, data input/output management, display management, and data conversion tools), written in ANSI C using OSF/Motif as the graphical user interface, the X Window System as the graphic environment, and UNIX as the operating system. The SAR Data Quality Analysis application is dedicated primarily to SAR data users and investigators who want to decode ERS-1 data products, obtaining both image data and the relevant ancillary data to be analyzed according to the SAR Quality Requirement Parameters defined by the European Space Agency.
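One common impulse-response-function measurement (sketched here in general terms, not the TELIMAGO implementation) oversamples a 1-D cut through a point-target response by zero-padded FFT interpolation and estimates its -3 dB mainlobe width. The synthetic sinc-like cut below is a stand-in for a real point-target extracted from the image.

```python
import numpy as np

def irf_mainlobe_width(cut, oversample=16):
    """Estimate the -3 dB width (in original samples) of a point-target cut."""
    n = len(cut)
    spectrum = np.fft.fftshift(np.fft.fft(cut))
    padded = np.zeros(n * oversample, dtype=complex)
    start = (len(padded) - n) // 2
    padded[start:start + n] = spectrum          # zero-pad => FFT interpolation
    fine = np.abs(np.fft.ifft(np.fft.ifftshift(padded)))
    power_db = 20 * np.log10(fine / fine.max() + 1e-12)
    above = np.flatnonzero(power_db >= -3.0)     # assumes a single mainlobe
    return (above[-1] - above[0] + 1) / oversample

# Hypothetical point-target cut (a sinc-like response plus a little noise).
x = np.linspace(-8, 8, 64)
cut = np.abs(np.sinc(x)) + np.random.default_rng(3).normal(0, 0.01, x.size)
print("approx. -3 dB width:", irf_mainlobe_width(cut), "samples")
```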
Interactive stereoscopic images can be viewed on a graphics workstation by producing side-by-side images and viewing them through a simple mirror device. However, it is important that the viewing device have pairs of adjustable nonparallel mirrors, so that large windows can be viewed without the viewer's sightlines having to diverge ('look wall-eyed'). Transformations to produce the correct images for this viewing method are described. Previous work applied to the case where both left and right images were to be superimposed and multiplexed in the same region of the screen, often called anaglyphs. Such cases are adequately handled by a translation and an off-axis perspective transformation. The same kind of transformation can be used with a parallel-mirror device, but such devices have practical limitations. This paper shows that nonparallel mirrors require a somewhat more complicated transformation involving scene rotations as well. Derivation of the correct angle of rotation is the main difficulty in computing this transformation. The transformation can be implemented by a sequence of graphics library procedures. Advantages and disadvantages of nonparallel-mirror methods are discussed.
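The translation-plus-off-axis-perspective case mentioned above can be sketched as follows (a generic formulation under assumed viewing parameters, not the paper's derivation); the additional scene rotation required for nonparallel mirrors would be composed onto these per-eye transforms.

```python
import numpy as np

def off_axis_frustum(left, right, bottom, top, near, far):
    """Standard asymmetric (off-axis) perspective projection matrix."""
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

def stereo_pair(eye_separation, screen_half_width, screen_half_height,
                screen_distance, near, far):
    """Left/right projection matrices and eye translations for a stereo pair."""
    pair = {}
    for name, sign in (("left", -1.0), ("right", +1.0)):
        shift = sign * 0.5 * eye_separation
        # Shear each frustum so both eyes converge on the same screen window.
        scale = near / screen_distance
        l = (-screen_half_width - shift) * scale
        r = (+screen_half_width - shift) * scale
        b = -screen_half_height * scale
        t = +screen_half_height * scale
        pair[name] = (off_axis_frustum(l, r, b, t, near, far),
                      np.array([shift, 0.0, 0.0]))   # eye translation
    return pair

print(stereo_pair(0.065, 0.3, 0.2, 0.7, 0.1, 100.0)["left"][0])
```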
We describe the process used in combining an existing computer simulation with Virtual Reality (VR) I/O devices and conventional visualization tools to make the simulation easier to use and the results easier to understand. VR input technology facilitates direct user manipulation of 3D simulation parameters. Commercially available visualization tools provide a flexible environment for representing abstract scientific data. VR output technology provides a more flexible and convincing way to view the visualization results than is afforded by contemporary visualization software. The desired goal of this process is a prototype system that minimizes man-machine interface barriers and enhances control over the simulation itself, so as to maximize the use of scientific judgment and intuition.
The purpose of scientific visualization is to simplify the analysis of numerical data by rendering the information as an image. Even when the image is familiar, as in the case of terrain data, preconceptions about what the image should look like and deceptive image artifacts can create misconceptions about what information is actually contained in the scene. One way of aiding the development of unambiguous visualizations is to add stereoscopic depth to the image. Despite the recent proliferation of affordable stereoscopic viewing equipment, few researchers are at this time taking advantage of stereo in their visualizations. It is generally perceived that the rendering time will have to be doubled in order to generate the pair, and so stereoscopic viewing is sacrificed in the name of expedient rendering. We show that this perception is often invalid. The second half of a stereoscopic image can be generated from the first half for a fraction of the computational cost of complete rendering, usually no more than 50 percent of the cost and in many cases as little as 5 percent. Using the techniques presented here, the benefits of stereoscopy can be added to existing visualization systems for only a small cost over current single-frame rendering methods.
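One widely used way to obtain the second view cheaply (a generic depth-based reprojection sketch, not necessarily the authors' method) is to warp the already-rendered image using its depth buffer and fill the small disocclusion gaps, rather than re-rendering the scene.

```python
import numpy as np

def reproject_to_right_eye(color, depth, eye_separation, focal_length):
    """Warp a rendered left-eye image into a right-eye view using its depth buffer."""
    height, width = depth.shape
    right = np.zeros_like(color)
    zbuffer = np.full((height, width), np.inf)
    # Horizontal screen-space disparity is inversely proportional to depth.
    disparity = (focal_length * eye_separation / depth).round().astype(int)
    for y in range(height):
        for x in range(width):
            nx = x - disparity[y, x]
            if 0 <= nx < width and depth[y, x] < zbuffer[y, nx]:
                right[y, nx] = color[y, x]
                zbuffer[y, nx] = depth[y, x]
    # Crude disocclusion filling: reuse the nearest filled pixel to the left.
    for y in range(height):
        for x in range(1, width):
            if np.isinf(zbuffer[y, x]):
                right[y, x] = right[y, x - 1]
    return right

# Tiny synthetic example: a gradient image over a two-plane depth buffer.
color = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
depth = np.where(color > 0.5, 2.0, 5.0)
print(reproject_to_right_eye(color, depth, eye_separation=0.1, focal_length=64.0).shape)
```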
In designing a graphical user interface (GUI) for our curvilinear volume renderer qp (quick projection), we made design decisions based upon experience in implementing several other visualization programs. As we gradually refined our vision of an appropriate interface, our programs have become more modular as well as easier to use, and parts of the interface can often be ported directly for use with other software. We present an overview of the interface with an explanation of the design decisions. While we do not claim that this is in any sense the 'ultimate user interface', we hope it may help others avoid time-consuming experimentation in interface design.
An essential part of visualizing massive time-dependent data sets is to identify, quantify, and track important regions and structures (objects of interest). This is true for almost all disciplines, since the crux of understanding the original simulation, experiment, or observation is the study of the evolution of the 'objects' present. Some well-known examples include tracking the progression of a storm, the motion and change of the 'ozone hole', or the movement of vortices shed by the meandering Gulf Stream. In this paper, we describe work in progress on extracting and tracking three-dimensional evolving objects in time-dependent simulations. The simulations are from ongoing research in computational fluid dynamics (CFD); however, the tracking procedures are general and are appropriate for many other disciplines.
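The identify/quantify/track pipeline can be sketched roughly as follows (a simplified nearest-centroid scheme under assumed thresholding, not the authors' procedure): extract connected 3D regions at each time step, summarize each by volume and centroid, and link objects across steps by centroid proximity.

```python
import numpy as np
from scipy import ndimage

def extract_objects(field, threshold):
    """Identify and quantify 3D regions above a threshold: volume and centroid."""
    labels, count = ndimage.label(field > threshold)
    objects = []
    for i in range(1, count + 1):
        mask = labels == i
        objects.append({"volume": int(mask.sum()),
                        "centroid": np.array(ndimage.center_of_mass(mask))})
    return objects

def link_by_centroid(objects_t0, objects_t1, max_distance=5.0):
    """Track objects between time steps by nearest-centroid correspondence."""
    links = []
    for i, a in enumerate(objects_t0):
        dists = [np.linalg.norm(a["centroid"] - b["centroid"]) for b in objects_t1]
        if dists and min(dists) <= max_distance:
            links.append((i, int(np.argmin(dists))))
    return links

# Hypothetical scalar field at two time steps (e.g., vorticity magnitude).
rng = np.random.default_rng(4)
f0 = ndimage.gaussian_filter(rng.normal(size=(32, 32, 32)), 3)
f1 = np.roll(f0, 2, axis=2)   # objects advect between steps
print(link_by_centroid(extract_objects(f0, f0.std()), extract_objects(f1, f1.std())))
```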
The visualization of computational fluid dynamics (CFD) simulations is typically a post-process: the engineer runs a simulation and stores the results in a solution file, and the solution is then visualized using a CFD visualization package. If a simulation is time-varying, a solution must be stored for various simulation times. Time-varying solution files are often very large, ranging in size from megabytes to gigabytes depending on the spatial and temporal resolution of the solution. Disk space soon becomes a limiting factor for the resolution of a solution. One way to avoid the disk problem is to visualize the solution one time step at a time, concurrently with the generation of the solution, discarding the solution and storing only the visualization. This paper discusses some of the key issues of concurrent visualization as well as an implementation of this technique.
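The concurrent workflow amounts to interleaving solver steps with visualization extraction, as in the minimal loop below; `advance_one_step` and `extract_isosurface` are hypothetical stand-ins for a real solver and extraction routine, and only the much smaller visualization product is written to disk.

```python
import numpy as np

def advance_one_step(state, t):
    """Stand-in for one solver time step; a real CFD code would go here."""
    return np.roll(state, 1, axis=0)

def extract_isosurface(state, level):
    """Stand-in for a visualization extraction; returns a compact geometry."""
    return np.argwhere(np.isclose(state, level, atol=0.05))

state = np.random.default_rng(5).random((64, 64, 64))
for t in range(100):
    state = advance_one_step(state, t)
    geometry = extract_isosurface(state, level=0.5)
    np.save(f"viz_{t:04d}.npy", geometry)   # keep the visualization...
    # ...and let `state` be overwritten next iteration instead of storing it.
```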
Unsteady 3-D computational fluid dynamics (CFD) results can be very large; some recent solutions exceed 100 gigabytes. Visualization techniques that access the entire data set will, therefore, be excruciatingly slow. We show that particle tracing in vector fields calculated from disk-resident solutions can be accomplished in O(number-of-particles) time, i.e., visualization time is independent of solution size. This is accomplished using memory-mapped files to avoid unnecessary disk I/O, and lazy evaluation of calculated vector fields to avoid unnecessary CPU operations. A C++ class hierarchy implements lazy evaluation such that visualization algorithms are unaware that the vector field is not stored in memory. A numerical experiment conducted on two solutions differing in size by a factor of 100 showed that particle tracing times varied by only 10-30 percent. Other visualization techniques that do not access the entire solution should also benefit from memory mapping and lazy evaluation.
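The authors' implementation is a C++ class hierarchy; the sketch below shows the same idea in Python using numpy.memmap, under the assumption of a hypothetical file layout of one float32 velocity vector per grid cell. Only the cells actually visited by the trace are paged in from disk.

```python
import numpy as np

class LazyVectorField:
    """Vector field backed by a memory-mapped file; only touched cells are read."""
    def __init__(self, path, shape):
        # Hypothetical layout: (nz, ny, nx, 3) float32 velocity components.
        self.data = np.memmap(path, dtype=np.float32, mode="r", shape=shape + (3,))

    def velocity(self, pos):
        k, j, i = (int(round(c)) for c in pos)   # nearest-cell lookup for brevity
        return np.asarray(self.data[k, j, i])

def trace(field, seed, step=0.1, nsteps=200):
    """Euler particle trace; cost depends only on the number of particles/steps."""
    path = [np.asarray(seed, dtype=float)]
    for _ in range(nsteps):
        path.append(path[-1] + step * field.velocity(path[-1]))
    return np.array(path)

# Create a small solution file for demonstration, then trace through it.
shape = (32, 32, 32)
np.ones(shape + (3,), dtype=np.float32).tofile("solution.vec")
field = LazyVectorField("solution.vec", shape)
print(trace(field, seed=(1.0, 1.0, 1.0))[-1])
```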
The physical interpretation of turbulent flow characteristics continues to be a major obstacle in the understanding and modeling of turbulence effects in the field of fluid mechanics. Turbulence modeling plays a major role in the predictive capabilities of engineering applications, and the development of new and improved models requires a better understanding of the mechanisms associated with turbulence. Improved data interpretation requires, among other things, a systematic approach to establishing the relationships among various turbulence quantities at many different scales of interaction. This paper discusses some avenues that we consider appropriate for examining high-Reynolds-number turbulence in a manner that allows illustration and interpretation of both large- and small-scale phenomena. The general principle is the need to examine data at different levels of abstraction based on functional relationships that vary through a designated hyperspace. The approach is illustrated with the enstrophy distribution, which is concentrated in regions of high wave number. Results are shown for a three-dimensional turbulent channel flow.
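For reference, pointwise enstrophy can be computed from a gridded velocity field as half the squared magnitude of the vorticity, ω = ∇×u. The finite-difference sketch below is generic (not the authors' code) and assumes a uniform (z, y, x) grid.

```python
import numpy as np

def enstrophy(u, v, w, dx=1.0):
    """Pointwise enstrophy 0.5*|curl(u,v,w)|^2 via central differences."""
    dudz, dudy, dudx = np.gradient(u, dx)   # derivatives along (z, y, x) axes
    dvdz, dvdy, dvdx = np.gradient(v, dx)
    dwdz, dwdy, dwdx = np.gradient(w, dx)
    omega_x = dwdy - dvdz
    omega_y = dudz - dwdx
    omega_z = dvdx - dudy
    return 0.5 * (omega_x**2 + omega_y**2 + omega_z**2)

# Hypothetical velocity components sampled on a (z, y, x) grid.
rng = np.random.default_rng(6)
u, v, w = (rng.normal(size=(32, 32, 32)) for _ in range(3))
ens = enstrophy(u, v, w, dx=0.1)
print("mean enstrophy:", ens.mean())
```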
Grand tours are a class of methods for visualizing multivariate data, or any finite set of points in n-space. The idea is to create an animation of data projections by moving a 2-dimensional projection plane through n-space. The path of planes used in the animation is chosen so that it becomes dense, that is, it comes arbitrarily close to any plane. One inspiration for the grand tour was the experience of trying to comprehend an abstract sculpture in a museum. One tends to walk around the sculpture, viewing it from many different angles. A useful class of grand tours is based on the idea of continuously interpolating an infinite sequence of randomly chosen planes. Visiting randomly (more precisely: uniformly) distributed planes guarantees denseness of the interpolating path. In computer implementations, 2-dimensional orthogonal projections are specified by two 1-dimensional projections which map to the horizontal and vertical screen dimensions, respectively. Hence, a grand tour is specified by a path of pairs of orthonormal projection vectors. This paper describes an interpolation scheme for smoothly connecting two pairs of orthonormal vectors, and thus for constructing interpolating grand tours. The scheme is optimal in the sense that connecting paths are geodesics in a natural Riemannian geometry.
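A sketch of geodesic interpolation between two projection planes is given below (not the paper's exact scheme): principal angles between the planes are obtained from the SVD of the product of the two orthonormal 2-frames, and intermediate frames rotate each principal direction by a fraction of its principal angle.

```python
import numpy as np

def random_frame(n, rng):
    """Random orthonormal 2-frame in n-space (columns span a random plane)."""
    q, _ = np.linalg.qr(rng.normal(size=(n, 2)))
    return q

def geodesic(frame_a, frame_b, t):
    """Orthonormal 2-frame at fraction t along a geodesic between the planes
    spanned by two n x 2 orthonormal frames."""
    u, cos_theta, vt = np.linalg.svd(frame_a.T @ frame_b)
    a = frame_a @ u                       # principal vectors in plane A
    b = frame_b @ vt.T                    # matched principal vectors in plane B
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    cols = []
    for i in range(2):
        if theta[i] < 1e-12:              # planes already share this direction
            cols.append(a[:, i])
            continue
        # Direction orthogonal to a_i within span(a_i, b_i).
        c = (b[:, i] - np.cos(theta[i]) * a[:, i]) / np.sin(theta[i])
        cols.append(np.cos(t * theta[i]) * a[:, i] + np.sin(t * theta[i]) * c)
    return np.column_stack(cols)

# Grand-tour step: smoothly move the projection plane between two random
# target planes, projecting the data at each intermediate frame.
rng = np.random.default_rng(7)
data = rng.normal(size=(500, 6))                     # points in 6-space
fa, fb = random_frame(6, rng), random_frame(6, rng)
for t in np.linspace(0.0, 1.0, 5):
    screen_xy = data @ geodesic(fa, fb, t)           # 2-D projection to display
print(screen_xy.shape)
```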
This paper describes a data visualization tool developed to support prototype development and testing of software standard interfaces in the open systems environment. The visualization capability is an extension of the Clemson Automated Testing System (CATS). CATS is a research facility which has proven valuable in exposing and addressing critical issues in emerging areas such as the IEEE POSIX real-time extensions. Preliminary investigations with CATS involving real-time interfaces and statistical reasoning about large scenarios of tests have motivated the need for a data visualization capability. Current approaches to testing open systems interfaces include very limited visualization aids, such as 2D bar charts representing statistical information about the test results. The visualization tool developed in this work extends these capabilities by introducing realism and abstraction via ray tracing and hierarchical data representations. These capabilities support a more meaningful analysis of system behavior in that a new, more descriptive set of questions becomes possible. Experimental results achieved with CATS and the visualization of system behavior with respect to deadlines for real-time systems are presented. Future applications of the data visualization tool in the open systems standards arena are proposed.
This paper presents a comparison between two different optical geometries used in Fourier Transform Profilometry (FTP): crossed-optical-axes geometry and parallel-optical-axes geometry. A mathematical proof is presented to demonstrate that parallel-optical-axes geometry can provide a wider range of measurement than crossed-optical-axes geometry. The FTP method decodes the 3-D shape information from the phase information stored in a 2-D image of the object onto which a Ronchi grating is projected. The phase information can be separated from the image signal by two methods: the phase subtraction method and the spectrum shift method. An experimental comparison between the two phase extraction methods is presented. The results show that the phase subtraction method is less susceptible to nonlinearity of the recording media and to systematic optical geometry error; the spectrum shift method, on the other hand, is faster in terms of computing time and more immune to noise. The experimental comparison also demonstrates a noise-immune phase unwrapping strategy, based on a minimum spanning tree approach, for forming a contiguous map of the object surface.
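The spectrum-shift style of phase extraction can be sketched as a generic Fourier-profilometry recipe (not the paper's implementation): isolate the positive-frequency lobe around the fringe carrier, shift it to baseband, and take the angle of the inverse transform to obtain the wrapped phase. The carrier frequency and bandwidth below are assumed values for a synthetic fringe pattern.

```python
import numpy as np

def wrapped_phase_spectrum_shift(fringe_row, carrier_frequency, bandwidth):
    """Wrapped phase of one fringe-image row via the spectrum-shift method."""
    n = fringe_row.size
    spectrum = np.fft.fft(fringe_row)
    freqs = np.fft.fftfreq(n)
    # Keep only the positive-frequency lobe around the carrier.
    window = np.abs(freqs - carrier_frequency) < bandwidth
    # Shift the lobe to baseband, removing the carrier.
    shifted = np.roll(spectrum * window, -int(round(carrier_frequency * n)))
    analytic = np.fft.ifft(shifted)
    return np.angle(analytic)            # wrapped to (-pi, pi]

# Synthetic fringe row: 0.125 cycles/pixel carrier deformed by a smooth phase.
x = np.arange(512)
true_phase = 2.0 * np.sin(2 * np.pi * x / 512)
row = 1.0 + 0.5 * np.cos(2 * np.pi * 0.125 * x + true_phase)
phi = wrapped_phase_spectrum_shift(row, carrier_frequency=0.125, bandwidth=0.05)
print(phi.shape)
```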
A new stereophotogrammetric analysis and 3D visualization technique allows accurate assessment of the scoliotic spine during instrumentation. Stereophoto pairs taken at each stage of the operation, together with robust statistical techniques, are used to compute 3D transformations of the vertebrae between stages. These determine rotation, translation, goodness of fit, and overall spinal contour. A polygonal model of the spine, built with a commercial 3D modeling package, is used to produce an animation sequence of the transformation. The visualizations have provided some important observations: correction of the scoliosis is achieved largely through vertebral translation and coronal plane rotation, contrary to claims that large axial rotations are required. The animations provide valuable qualitative information for surgeons assessing the results of scoliotic correction.
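The rigid transformation of a vertebra between stages can be estimated from corresponding 3D landmarks with a standard least-squares fit (SVD/Kabsch), sketched below; this is not the paper's robust estimator, which would typically wrap such a fit with outlier down-weighting. The landmark data here are synthetic.

```python
import numpy as np

def rigid_fit(points_a, points_b):
    """Least-squares rotation R and translation t with points_b ≈ points_a @ R.T + t."""
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
    h = (points_a - ca).T @ (points_b - cb)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cb - r @ ca
    return r, t

# Hypothetical vertebral landmarks before and after an instrumentation stage.
rng = np.random.default_rng(8)
before = rng.normal(size=(6, 3))
angle = np.radians(10.0)
rot = np.array([[np.cos(angle), -np.sin(angle), 0],
                [np.sin(angle),  np.cos(angle), 0],
                [0, 0, 1]])
after = before @ rot.T + np.array([1.0, 0.5, 0.0])

r, t = rigid_fit(before, after)
residual = np.linalg.norm(before @ r.T + t - after)   # goodness of fit
print(np.degrees(np.arccos((np.trace(r) - 1) / 2)), residual)
```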
Patients with extreme jaw imbalance must often undergo operative correction. The goal of therapy is to harmonize the stomatognathic system and to achieve an aesthetic correction of the facial profile. A new procedure is presented which supports the maxillofacial surgeon in planning the operation and which also shows the patient the expected result of the treatment by means of video images. Once an x-ray has been digitized, it is possible to produce individualized cephalometric analyses. Using a ceph on screen, all current orthognathic operations can be simulated, whereby the bony segments are moved according to given parameters and a new soft-tissue profile is calculated. The profile of the patient is fed into the computer by way of a video system and correlated to the ceph. Using the simulated operation, the computer calculates a new video image of the patient which presents the expected postoperative appearance. In studies of patients treated between 1987 and 1991, 76 out of 121 patients could be evaluated. The deviation in profile change varied between 0.0 and 1.6 mm. A side effect of the practical application was an increase in patient compliance.
The complex structure and functional capacity of the mandible place high demands on the design of mandibular reconstructions for graft or transplant purposes. When the iliac crest is used as the basis for grafts to bridge large defects, the graft is shaped empirically by the operator according to his or her experience, and it is often necessary to dissect and reconstruct it numerous times. A 3-D computed tomogram of the lower jaw and ilium is acquired for patients undergoing a planned mandible reconstruction. The 3-D CT data are processed on a workstation using a medical image analysis system. The ala of the ilium is superimposed over the region of the lower jaw which is to be replaced. This enables the structure of the lower jaw and the structure of the iliac crest to be matched to within an accuracy of one voxel, despite the complex three-dimensional structure and distortions in all three spatial planes. In accordance with the computer simulation, the appropriately shaped iliac crest graft is taken from the individually calculated donor site and transplanted into the resected section of the lower jaw. An exact reconstruction of the lower jaw bone is thus made possible using computer-assisted individual osteotomy design, resulting in complete restoration of shape and function.
There are a variety of commercial packages available that will construct iso-surfaces directly from CT, MRI, or PET scans. The problems lie not in the accuracy of the reconstruction, but rather in the integration with other commercial or custom software. Reconstruction of the human mandible must be designed so that finite element analysis can be applied interactively and the mandible can be morphed to reflect growth and other changes. A collection of software tools has been integrated to provide multiple-range iso-surface reconstruction, generation of both triangulated and tetrahedral meshes, and the incorporation of hierarchical B-spline bases into an interactive graphical system that permits easy transfer among all of the software components. Thresholding of CT, MRI, or PET data is accomplished using a standard implementation of the marching cubes algorithm with a user-provided density threshold range. The set of point triples generated is then exported to a 3D alpha-shape generator. Once the alpha shape has been thresholded to reflect the geometry of interest, the data can be exported to a Delaunay tessellator that produces the triangulated surface mesh as well as the solid interior tetrahedra. After construction of both the exterior and interior of the objects of interest, in this case a mandible, the surface representation can be exported to other modelers or to finite element packages to perform stress and strain analyses, and morphed to model changes due to growth.
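A sketch of the thresholding step is shown below using scikit-image's marching cubes (one standard implementation, not necessarily the toolchain described here): the volume is binarized within a user-provided density range and a triangulated surface is extracted, ready for export to an alpha-shape generator, tessellator, or finite element package.

```python
import numpy as np
from skimage import measure

def isosurface_from_range(volume, low, high):
    """Triangulated surface enclosing voxels whose density lies in [low, high]."""
    inside = ((volume >= low) & (volume <= high)).astype(np.float32)
    # Extract the 0.5-level surface of the binarized volume (marching cubes).
    verts, faces, normals, values = measure.marching_cubes(inside, level=0.5)
    return verts, faces

# Hypothetical CT-like volume: a dense spherical shell in a low-density background.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
radius = np.sqrt(x**2 + y**2 + z**2)
volume = np.where((radius > 20) & (radius < 24), 1200.0, 100.0)
volume += np.random.default_rng(9).normal(0, 10.0, volume.shape)

verts, faces = isosurface_from_range(volume, low=1000.0, high=1400.0)
print(len(verts), "vertices,", len(faces), "triangles")   # export to a mesher next
```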