We have developed a highly efficient, high-fidelity approach to parallel volume rendering called permutation warping. Permutation warping may use any one-pass filter kernel, such as trilinear reconstruction, an advantage over the shear-warp approach. This work discusses experiments in improving permutation warping with data-dependent optimizations to make it more competitive in speed with the shear-warp algorithm. We use a linear octree on each processor to collapse homogeneous regions and eliminate empty space. Static load balancing is also used to redistribute nodes from a processor's octree to achieve higher efficiencies. In studies on a 16,384-processor MasPar MP-2, we have measured improvements of 3 to 5 times over our previous results. Run times are 73 milliseconds (29 Mvoxels/second, or 14 frames/second) for 128³ volumes, the fastest MasPar volume rendering numbers in the literature, and 427 milliseconds (39 Mvoxels/second, or 2 frames/second) for 256³ volumes. These results show that coherency adaptations are effective for permutation warping. Because permutation warping also has good scalability characteristics, it proves to be a superior approach for massively parallel computers when image fidelity is required, providing further evidence of its utility as a scalable, high-fidelity, high-performance approach to parallel volume visualization.
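The octree coherency optimization above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it recursively collapses octants of a cubic volume that are homogeneous within a tolerance, and drops fully empty octants, yielding a flat ("linear") list of leaf nodes. The homogeneity test (max − min ≤ tol) and all names are illustrative assumptions.

```python
import numpy as np

def build_linear_octree(vol, origin=(0, 0, 0), size=None, tol=0.02, nodes=None):
    """Collapse homogeneous octants of a cubic volume into a flat leaf list.

    Returns a 'linear octree': a list of (origin, size, mean) leaves.
    Empty octants (mean opacity 0) are skipped entirely, so homogeneous
    regions collapse and empty space is eliminated.
    """
    if nodes is None:
        nodes = []
    if size is None:
        size = vol.shape[0]
    x, y, z = origin
    block = vol[x:x + size, y:y + size, z:z + size]
    if size == 1 or block.max() - block.min() <= tol:
        mean = float(block.mean())
        if mean > 0.0:                      # drop empty space
            nodes.append((origin, size, mean))
        return nodes
    h = size // 2                           # subdivide into 8 octants
    for dx in (0, h):
        for dy in (0, h):
            for dz in (0, h):
                build_linear_octree(vol, (x + dx, y + dy, z + dz), h, tol, nodes)
    return nodes

# A 16^3 volume, empty except for one homogeneous 8^3 corner:
vol = np.zeros((16, 16, 16))
vol[:8, :8, :8] = 0.5
leaves = build_linear_octree(vol)           # collapses to a single leaf
```

In a parallel setting like the paper's, each processor would build such a tree over its subvolume, and static load balancing would then redistribute leaves across processors.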
Volumetric irregular grids are the next frontier to conquer in interactive 3D graphics. Visualization algorithms for rectilinear 256³ data volumes have been optimized to achieve one to 15 frames/second, depending on the workstation. With equivalent computational resources, irregular grids with millions of cells may take minutes to render for a new viewpoint. The state of the art in graphics rendering, PixelFlow, provides screen- and object-space parallelism for polygonal rendering; unfortunately, volume rendering of irregular data is at odds with its sort-last architecture. I investigate parallel algorithms for direct volume rendering on PixelFlow that generalize to other compositing architectures. Experiments are performed on the NASA Langley fighter dataset using the projected-tetrahedra approach of Shirley and Tuchman, with tetrahedral sorting done by the circumscribing-sphere approach of Cignoni et al. Key techniques include sort-first on sort-last, world-space subdivision by clipping, rearrangeable linear compositing for any view angle, and static load balancing. The new world-space subdivision by clipping provides efficient and correct rendering of unstructured data by using object-space clipping planes. Research results include performance estimates on PixelFlow for irregular-grid volume rendering: PixelFlow is estimated to achieve 30 frames/second on irregular grids of 300,000 tetrahedra, or 10 million tetrahedra per second.
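The circumscribing-sphere sort mentioned above can be sketched as follows: each tetrahedron's circumcenter is found by solving a small linear system, and tetrahedra are ordered back-to-front by the distance from the eye to that center. This is a simplified sketch of the idea behind Cignoni et al.'s approach, not their implementation; function names and the eye-point convention are assumptions.

```python
import numpy as np

def circumcenter(tet):
    """Circumcenter of a tetrahedron given as a 4x3 vertex array.

    Solves 2(v_i - a) . c = |v_i|^2 - |a|^2 for i = 1..3, which makes c
    equidistant from all four vertices.
    """
    a = tet[0]
    A = 2.0 * (tet[1:] - a)                      # 3x3 system matrix
    b = np.sum(tet[1:] ** 2 - a ** 2, axis=1)    # squared-norm differences
    return np.linalg.solve(A, b)

def depth_sort(tets, eye):
    """Back-to-front order by distance from eye to each circumsphere center."""
    keys = [-np.linalg.norm(circumcenter(t) - eye) for t in tets]
    return [t for _, t in sorted(zip(keys, tets), key=lambda p: p[0])]

eye = np.array([0.0, 0.0, 10.0])
t1 = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)  # near tet
t2 = t1 + np.array([0.0, 0.0, -5.0])                                # farther tet
order = depth_sort([t1, t2], eye)            # t2 first (drawn before t1)
```

A sort keyed on a single point per tetrahedron is approximate for general meshes, which is part of why the sorting strategy matters for compositing architectures like PixelFlow.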
In this paper we present a system for visualizing volume data from remote supercomputers. We have developed both parallel volume rendering algorithms and the World Wide Web (WWW) software for accessing the data at the remote sites. The implementation uses Hypertext Markup Language (HTML), Java, and Common Gateway Interface (CGI) scripts to connect WWW servers/clients to our volume renderers. The front ends are interactive Java classes for specification of view, shading, and classification inputs. We present performance results and implementation details for connections to our computing resources at the University of California, Santa Cruz, including a MasPar MP-2, an SGI Reality Engine RE2, and SGI Challenge machines. We apply the system to the task of visualizing trabecular bone from finite element simulations. Fast volume rendering on remote compute servers through a web interface increases the accessibility of the results to more users. User interface issues, an overview of parallel algorithm developments, and overall system interfaces and protocols are presented. Access is available through the Uniform Resource Locator http://www.cse.ucsc.edu/research/slvg/.
We process multispectral satellite imagery to load into our environmental database on the UCSC/NPS/MBARI REINAS project. We have developed methods for segmenting GOES (Geostationary Operational Environmental Satellite) images that take advantage of the multispectral data available. Our algorithm performs classification of different types of clouds, as well as characterization of cloud elevations. The resulting information is used to incorporate the texture-mapped satellite imagery into a combined model/measurement visualization. The approximate cloud elevations, types, and opacities are used to develop a three-dimensional cloud model for use in visualization. Discrete Karhunen-Loeve transformations, or Hotelling transformations, are used to calculate the principal components of the multispectral data. Accurate segmentation and feature extraction of the clouds assist in validating and evaluating atmospheric prediction models against true remotely sensed data. We demonstrate the integrated measurement/model visualization with an OpenGL application using texture mapping. The spectral data are also used to control the free parameters in the texture mapping of the modeled clouds. We are working on novel compression techniques that combine the KLT with segmentation and feature extraction, and we also hope to develop algorithms that visualize the compressed imagery directly.
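The discrete Karhunen-Loeve (Hotelling) transformation above amounts to an eigen-decomposition of the band covariance matrix of the multispectral pixels. The sketch below shows that computation on synthetic data; the function name, band layout, and test data are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def hotelling_components(bands):
    """KLT / Hotelling transform of multispectral imagery.

    bands: array of shape (n_bands, height, width).
    Returns principal-component images of the same shape, ordered by
    decreasing variance (component 0 carries the most variance).
    """
    n, h, w = bands.shape
    X = bands.reshape(n, -1).astype(float)       # one row per band
    X -= X.mean(axis=1, keepdims=True)           # zero-mean each band
    cov = X @ X.T / (X.shape[1] - 1)             # n x n band covariance
    vals, vecs = np.linalg.eigh(cov)             # eigh returns ascending order
    order = np.argsort(vals)[::-1]               # reorder to descending variance
    return (vecs[:, order].T @ X).reshape(n, h, w)

# Two perfectly correlated synthetic 'bands': the first component carries
# all the variance, and the second collapses to (numerically) zero.
rng = np.random.default_rng(0)
b0 = rng.random((8, 8))
comps = hotelling_components(np.stack([b0, 2.0 * b0]))
```

Concentrating variance in the leading components is also what makes the KLT attractive for the compression work mentioned at the end of the abstract.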
Environmental data have inherent uncertainty that is often ignored in visualization. For example, meteorological stations measure wind with good accuracy, but the winds are often averaged over minutes or hours. As another example, Doppler radars (wind profilers and ocean current radars) take thousands of samples and average the possibly spurious returns. Other sources, including time series data, carry a wealth of uncertainty information that traditional vector visualization methods, such as wind barbs and arrow glyphs, simply ignore. We have developed new vector glyphs to visualize uncertain winds and ocean currents. Our approach is to encode uncertainty in direction and magnitude, along with the mean direction and length, in vector glyph plots. Our glyphs show the variation in uncertainty and provide fair comparisons of data from instruments, models, and time averages of varying certainty. We use both qualitative and quantitative methods to compare our glyphs to traditional ones: subjective comparison tests with experts (meteorologists and oceanographers) are provided, as well as objective tests (data-ink maximization) in which the information density of our new glyphs and traditional glyphs is compared. We show that visualizing data together with their uncertainty information enhances understanding of the continuous range of data quality in environmental vector fields.
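A glyph that encodes both mean and uncertainty needs summary statistics from the underlying wind samples. The sketch below computes one plausible set: mean direction and speed, speed standard deviation, and an angular spread from the circular variance. The mapping to glyph geometry, and all names here, are assumptions for illustration, not the paper's design.

```python
import numpy as np

def glyph_parameters(u, v):
    """Statistics an uncertainty glyph could encode for wind samples (u, v).

    Returns mean direction (radians), mean speed, speed standard deviation,
    and an angular spread derived from the mean resultant length R.
    """
    u, v = np.asarray(u, float), np.asarray(v, float)
    speed = np.hypot(u, v)
    theta = np.arctan2(v, u)
    # Mean resultant length R in [0, 1]; R = 1 means all directions agree.
    R = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
    ang_spread = np.sqrt(-2.0 * np.log(max(R, 1e-12)))   # circular std, radians
    return {
        "direction": float(np.arctan2(v.mean(), u.mean())),
        "mean_speed": float(speed.mean()),
        "speed_std": float(speed.std(ddof=1)),
        "angular_spread": float(ang_spread),
    }

# Identical samples: zero spread in both speed and direction.
p = glyph_parameters([1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
```

A glyph renderer could then draw the mean vector at `direction`/`mean_speed` and widen an arc or band in proportion to `angular_spread` and `speed_std`, so glyphs from averaged and raw data remain fairly comparable.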
We present the design and implementation of Collaborative Spray, or CSpray (pronounced 'sea spray'). CSpray is a CSCW (Computer-Supported Cooperative Work) application geared towards supporting multiple users in a collaborative scientific visualization setting. Scientists can share data sets, graphics primitives, and images, and create visualization products within a view-independent shared workspace. CSpray supports incremental updates to reduce network traffic, separates large data streams from smaller command streams with a two-level communication strategy, provides different service levels according to a client's resources, enforces permissions for different levels of sharing, distinguishes private from public resources, and provides multiple fair and intuitive floor-control schemes for shared objects. Off-the-shelf multimedia tools such as nv and vat can be used concurrently. CSpray is based on the spray rendering visualization interaction technique to generate contours, surfaces, particles, and other graphics primitives from scientific data sets such as those found in oceanography and meteorology.
Robert Haralick, Arun Somani, Craig Wittenbrink, Robert Johnson, Kenneth Cooper, Linda Shapiro, Ihsin Phillips, Jenq Hwang, William Cheung, Yung Yao, Chung-Ho Chen, Larry Yang, Brian Daugherty, Bob Lorbeski, Kent Loving, Tom Miller, Larye Parkins, Steven Soos
KEYWORDS: Image processing, Machine vision, Process control, Telecommunications, Computer vision technology, Signal processing, Control systems, Data processing, Computer architecture, Binary data
The Proteus architecture is a highly parallel MIMD (multiple-instruction, multiple-data) machine optimized for large-granularity tasks such as machine vision and image processing. The system can achieve 20 Gigaflops (80 Gigaflops peak) and accepts data via multiple serial links at a rate of up to 640 megabytes/second. It employs a hierarchical, reconfigurable interconnection network whose highest level is a circuit-switched Enhanced Hypercube serial interconnection network for internal data transfers. The system is designed to use 256 to 1,024 RISC processors, each with a one-megabyte external read/write-allocating cache for reduced multiprocessor contention. The system detects, locates, and replaces faulty subsystems using redundant hardware to facilitate fault tolerance. The parallelism is directly controllable through an advanced software system for partitioning, scheduling, and development. System software includes a translator for the INSIGHT language, a parallel debugger, low- and high-level simulators, and a message-passing system for all control needs. Image processing application software includes a variety of point operators, neighborhood operators, convolution, and the mathematical morphology operations of binary and gray-scale dilation, erosion, opening, and closing.