This PDF file contains the front matter associated with SPIE Proceedings Volume 9397, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Astrophysics is transforming from a data-starved to a data-swamped discipline, fundamentally changing the nature of scientific inquiry and discovery. New technologies are enabling the detection, transmission, and storage of data of hitherto unimaginable quantity and quality across the electromagnetic, gravitational, and particle spectra. The observational data obtained during this decade alone will supersede everything accumulated over the preceding four thousand years of astronomy. Currently there are four large-scale photometric and spectroscopic surveys underway, each generating and/or utilizing hundreds of terabytes of data per year. Some will focus on the static universe while others will greatly expand our knowledge of transient phenomena. Maximizing the science from these programs requires integrating the processing pipeline with high-performance computing resources, coupling it to large astrophysics databases, and applying machine learning algorithms with near real-time turnaround. Here we present an overview of one of these programs, the Palomar Transient Factory (PTF). We cover the processing and discovery pipeline we developed for PTF at LBNL and NERSC, and several of the notable discoveries made during its four years of observations.
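As a toy sketch of the difference-imaging step at the heart of such a discovery pipeline (hypothetical and greatly simplified; the real PTF pipeline also performs astrometric alignment, PSF matching, and machine-learned candidate vetting):

```python
# Minimal sketch of transient detection by image differencing: subtract a
# reference exposure from a new science exposure and flag pixels whose
# residual flux exceeds a threshold. Hypothetical, greatly simplified.

def detect_transients(reference, science, threshold):
    """Return (x, y) pixel coordinates where new flux appeared."""
    candidates = []
    for y, (ref_row, sci_row) in enumerate(zip(reference, science)):
        for x, (r, s) in enumerate(zip(ref_row, sci_row)):
            if s - r > threshold:   # flux appeared since the reference epoch
                candidates.append((x, y))
    return candidates

reference = [[10, 10, 11],
             [10, 12, 10],
             [11, 10, 10]]
science   = [[10, 10, 11],
             [10, 55, 10],   # a new point source at (1, 1)
             [11, 10, 10]]

print(detect_transients(reference, science, threshold=20))  # [(1, 1)]
```

In a production pipeline this per-pixel pass would run in parallel over image tiles, with the candidate list fed to a classifier for vetting.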
There is a rising trend of data analysis and visualization tasks being performed on a tablet device. Apps with interactive data visualization capabilities are available for a wide variety of domains. We investigate whether users grasp how to effectively interpret and interact with visualizations. We conducted a detailed user evaluation to study the abilities of individuals with respect to analyzing data on a tablet through an interactive visualization app. Based upon the results of the user evaluation, we find that most subjects performed well at understanding and interacting with simple visualizations, specifically tables and line charts. A majority of the subjects struggled with identifying interactive widgets, recognizing interactive widgets with overloaded functionality, and understanding visualizations which do not display data for sorted attributes. Based on our study, we identify guidelines for designers and developers of mobile data visualization apps that include recommendations for effective data representation and interaction.
Today, users access information and rich media from anywhere using the web browser on their desktop computers, tablets, or smartphones. But the web is evolving beyond media delivery: interactive graphics applications such as visualization or gaming become feasible as browsers gain functionality. However, to deliver large-scale visualization to thin clients like mobile devices, a dedicated server component is necessary. Ideally, the client runs directly within the browser the user is accustomed to, requiring no installation of a plugin or native application. In this paper, we present the state of the art in technologies that enable plugin-free remote rendering in the browser. Further, we describe a remote visualization system unifying these technologies. The system transfers rendering results to the client as images or as a video stream. We utilize the upcoming World Wide Web Consortium (W3C) Web Real-Time Communication (WebRTC) standard, together with the Native Client (NaCl) technology built into Chrome, to deliver video with low latency.
Nowhere is the need to understand large heterogeneous datasets more important than in disaster monitoring
and emergency response, where critical decisions have to be made in a timely fashion and the discovery of
important events requires an understanding of a collection of complex simulations. To gain enough insights
for actionable knowledge, the development of models and analysis of modeling results usually requires that
models be run many times so that all possibilities can be covered. Central to the goal of our research is, therefore, the use of ensemble visualization of a large-scale simulation space to appropriately aid decision makers in reasoning about infrastructure behaviors and vulnerabilities in support of critical infrastructure analysis. This requires bringing together computing-driven simulation results with the human decision-making process via interactive visual analysis. We have developed a general critical infrastructure simulation and analysis system for situationally aware emergency response during natural disasters. Our system demonstrates a scalable visual analytics infrastructure with a mobile interface for analysis, visualization, and interaction with large-scale simulation results, in order to better understand their inherent structure and predictive capabilities. To generalize the mobile aspect, we introduce mobility as a design consideration for the system. The utility and efficacy of this research have been evaluated by domain practitioners and disaster response managers.
Graph visualization continues to be a major challenge in the field of information visualization, meanwhile gaining importance
due to the power of graph-based formulations across a wide variety of domains from knowledge representation
to network flow, bioinformatics, and software optimization. We present the Open Semantic Network Analysis Platform
(OSNAP), an open-source visualization framework designed for the flexible composition of 2D and 3D graph layouts.
Analysts can filter and map a graph’s attributes and structural properties to a variety of visual properties, including shape,
color, and 3D position. Using the Provider Model software engineering pattern, developers can extend the framework with
additional mappings and layout algorithms. We demonstrate the framework’s flexibility by applying it to two separate
domain ontologies and finally outline a research agenda to improve the value of semantic network visualization for human
insight and analysis.
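A rough illustration of this kind of attribute-to-visual mapping, in plain Python with hypothetical names (not OSNAP's actual Provider Model API):

```python
# Illustrative attribute mapping: a node attribute drives color, while a
# structural property (degree) drives glyph size. All names are hypothetical.

edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
node_type = {"A": "class", "B": "class", "C": "property", "D": "class"}
palette = {"class": "#1f77b4", "property": "#ff7f0e"}

def degree(node):
    return sum(node in edge for edge in edges)

visual = {
    name: {"color": palette[kind], "size": 5 + 3 * degree(name)}
    for name, kind in node_type.items()
}
print(visual["C"])  # the highest-degree node gets the largest glyph
```

A provider-style framework would let developers register new such mappings (e.g. attribute-to-3D-position) without touching the core layout code.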
In our daily lives, images and texts are among the most common forms of data we need to handle. We
present iGraph, a graph-based approach for visual analytics of large image and text collections. Given such a
collection, we compute the similarity between images, the distance between texts, and the connection between
image and text to construct iGraph, a compound graph representation which encodes the underlying relationships
among these images and texts. To enable effective visual navigation and comprehension of iGraph with tens of
thousands of nodes and hundreds of millions of edges, we present a progressive solution that offers collection
overview, node comparison, and visual recommendation. Our solution not only allows users to explore the entire
collection with representative images and keywords, but also supports detailed comparison for understanding and
intuitive guidance for navigation. For performance speedup, multiple GPUs and CPUs are utilized for processing
and visualization in parallel. We experiment with two image and text collections and leverage a cluster driving a
display wall of nearly 50 million pixels. We show the effectiveness of our approach by demonstrating experimental
results and conducting a user study.
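The core construction step can be sketched as pairwise similarity over item feature vectors, keeping edges above a threshold; the vectors below are toy stand-ins for real image and text descriptors:

```python
import math

# Sketch of iGraph-style graph construction: cosine similarity between
# feature vectors, with an edge kept only when similarity is high.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

features = {
    "img1": [1.0, 0.0, 1.0],
    "img2": [0.9, 0.1, 1.0],   # nearly identical to img1
    "txt1": [0.0, 1.0, 0.0],
}

edges = [
    (a, b, cosine(features[a], features[b]))
    for i, a in enumerate(features)
    for b in list(features)[i + 1:]
]
graph = [(a, b, s) for a, b, s in edges if s > 0.8]
print(graph)  # only the img1-img2 edge survives the threshold
```

At the paper's scale (tens of thousands of nodes, hundreds of millions of edges) this all-pairs step is exactly what the multi-GPU/CPU parallelization targets.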
Ongoing research on information visualization has produced an ever-increasing number of visualization designs.
Despite this activity, limited progress has been made in categorizing this large number of information visualizations.
This makes understanding their common design features challenging, and obscures the yet unexplored
areas of novel designs. With this work, we provide a categorization from an evolutionary perspective, leveraging a computational model that represents evolutionary processes: the phylogenetic tree. The result, a phylogenetic tree of a design corpus of hierarchical visualizations, enables better understanding of the various design features of hierarchical information visualizations, and further illuminates the space in which the visualizations lie, through support for interactive clustering and novel design suggestions. We demonstrate these benefits with our software system, in which a corpus of two-dimensional hierarchical visualization designs is constructed into a phylogenetic tree. The system supports visual interactive clustering and the suggestion of novel designs; the latter capacity is also demonstrated via a collaboration with an artist who sketched new designs using our system.
Emotions are one of the unique aspects of human nature, and sadly, at the same time, one of the elements that our technological world fails to capture and consider, due to their subtlety and inherent complexity. But with the current dawn of new technologies that enable the interpretation of emotional states based on techniques involving facial expressions, speech and intonation, electrodermal response (EDR), and brain-computer interfaces (BCIs), we are finally able to access real-time user emotions in various system interfaces. In this paper we
introduce emotion-prints, an approach for visualizing user emotional valence and arousal in the context of
multi-touch systems. Our goal is to offer a standardized technique for representing user affective states in the
moment when and at the location where the interaction occurs in order to increase affective self-awareness,
support awareness in collaborative and competitive scenarios, and offer a framework for aiding the evaluation
of touch applications through emotion visualization. We show that emotion-prints are not only independent
of the shape of the graphical objects on the touch display, but also that they can be applied regardless of the
acquisition technique used for detecting and interpreting user emotions. Moreover, our representation can encode
any affective information that can be decomposed or reduced to Russell's two-dimensional space of valence and
arousal. Our approach is supported by a BCI-based user study and a follow-up discussion of advantages and
limitations.
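A hypothetical rendering rule in the spirit of emotion-prints, assuming valence normalized to [-1, 1] and arousal to [0, 1] per Russell's space (the specific color and size mapping here is illustrative, not the paper's):

```python
# Valence picks the hue (red = negative, green = positive); arousal scales
# the halo drawn around the touch point. Both mappings are assumptions.

def emotion_print(valence, arousal, touch_xy, base_radius=20):
    red = int(255 * (1 - valence) / 2)     # valence -1 -> pure red
    green = int(255 * (1 + valence) / 2)   # valence +1 -> pure green
    return {
        "center": touch_xy,
        "radius": base_radius * (1 + arousal),  # higher arousal, larger halo
        "rgb": (red, green, 0),
    }

print(emotion_print(valence=1.0, arousal=0.5, touch_xy=(120, 80)))
```

Because the rule depends only on a (valence, arousal) pair, it is independent of both the touched widget's shape and the acquisition device, matching the paper's design goal.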
Isosurface extraction is a fundamental technique used for both surface reconstruction and mesh generation. One method to extract well-formed isosurfaces is a particle system; unfortunately, particle systems can be slow. In this paper, we introduce an enhanced parallel particle system that uses the closest point embedding as the surface representation to speed up the particle system for isosurface extraction. The closest point embedding is used in the Closest Point Method (CPM), a technique that uses a standard three-dimensional numerical PDE solver on two-dimensional embedded surfaces. To take full advantage of the closest point embedding, it is coupled with a Barnes-Hut tree code on the GPU. This new technique produces well-formed, conformal, unstructured triangular and tetrahedral meshes from labeled multi-material volume datasets. Further, this new parallel implementation of the particle system is faster than any known method for conformal multi-material mesh extraction. The resulting speed-ups can reduce the time from labeled data to mesh from hours to minutes, benefiting users, such as bioengineers, who employ triangular and tetrahedral meshes.
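The closest point idea can be sketched for an analytically known surface: particles move freely in 3D and are snapped back by the closest-point map after every update. (The CPM in the paper operates on general sampled surfaces; this toy uses a sphere.)

```python
# Particles relax in the full 3D space; the closest-point map keeps them
# on the surface. Here the surface is a unit sphere, so the map is exact.

def closest_point_on_sphere(p, center=(0.0, 0.0, 0.0), radius=1.0):
    d = [pi - ci for pi, ci in zip(p, center)]
    norm = sum(x * x for x in d) ** 0.5
    return tuple(ci + radius * x / norm for ci, x in zip(center, d))

def relax(p, displacement):
    """One particle-system step: displace in 3D, then reproject."""
    moved = [a + b for a, b in zip(p, displacement)]
    return closest_point_on_sphere(moved)

p = closest_point_on_sphere((2.0, 0.0, 0.0))   # lands at (1.0, 0.0, 0.0)
p = relax(p, (0.0, 1.0, 0.0))
print(p)  # back on the unit sphere, at 45 degrees in the xy-plane
```

Because the reprojection is independent per particle, it parallelizes trivially, which is what makes the GPU coupling with a Barnes-Hut neighbor search attractive.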
Reconstruction of 2D image primitives or of 3D volumetric primitives is one of the most common operations performed by the rendering components of modern visualization systems. Because this operation is often aided by GPUs, reconstruction is typically restricted to first-order interpolation. With the advent of in situ visualization, however, the assumption that rendering algorithms are generally executed on GPUs is no longer adequate. We thus propose a framework that provides versatile texture filtering capabilities: up to third-order reconstruction
using various types of cubic filtering and interpolation primitives; cache-optimized algorithms that integrate
seamlessly with GPGPU rendering or with software rendering that was optimized for cache-friendly "Structure
of Array" (SoA) access patterns; a memory management layer (MML) that gracefully hides the complexities
of extra data copies necessary for memory access optimizations such as swizzling, for rendering on GPGPUs,
or for reconstruction schemes that rely on pre-filtered data arrays. We demonstrate the effectiveness of our software architecture by integrating it into, and validating it with, the open-source direct volume rendering (DVR) software DeskVOX.
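One strand of such a filtering framework, as a sketch: third-order (cubic Catmull-Rom) reconstruction of a 1D sample array, versus the first-order linear interpolation GPUs provide natively:

```python
# Cubic Catmull-Rom reconstruction of a 1D sample array. The same kernel,
# applied separably per axis, gives tricubic filtering of a 3D volume.

def catmull_rom(samples, x):
    """Reconstruct at fractional position x (1 <= x <= len(samples) - 2)."""
    i = int(x)
    t = x - i
    p0, p1, p2, p3 = samples[i - 1], samples[i], samples[i + 1], samples[i + 2]
    return 0.5 * (
        2 * p1
        + (-p0 + p2) * t
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
        + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t
    )

data = [0.0, 1.0, 4.0, 9.0, 16.0]   # samples of f(x) = x^2
print(catmull_rom(data, 1.5))       # 2.25 -- exact for quadratics
```

Linear interpolation of the same samples would return 2.5 at x = 1.5; the cubic kernel's wider four-sample support is what buys the extra reconstruction order.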
A new approach for view-dependent isosurfacing on volumetric data is described. The approach is designed for client-server environments where the client's computational capabilities are much more limited than those of the server and where the network between the two has limited bandwidth, for example 802.11b wireless. Regions of the dataset that contain no visible part of the isosurface are determined on the server, using an approximate isosurface silhouette and octree-driven processing. The visible regions of interest in the dataset are then transferred to the client for isosurfacing. The approach also enables fast generation of renderings when the viewpoint changes, via minimal additional data transfer to the client. Experimental results for application of the approach to volumetric data are also presented.
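The server-side culling can be sketched with the classic min/max test: a block of the volume can contain part of the isosurface only if the isovalue lies between the block's minimum and maximum sample. (The approach described above additionally culls blocks hidden behind the approximate silhouette; that test is omitted here.)

```python
# Octree-style culling sketch: transfer only blocks whose value range
# straddles the isovalue. Block contents are toy data.

def blocks_to_send(volume_blocks, isovalue):
    return [
        name for name, samples in volume_blocks.items()
        if min(samples) <= isovalue <= max(samples)
    ]

blocks = {
    "block0": [0.1, 0.2, 0.3],    # entirely below the isovalue
    "block1": [0.4, 0.6, 0.7],    # straddles it -> transfer to client
    "block2": [0.8, 0.9, 0.95],   # entirely above
}
print(blocks_to_send(blocks, isovalue=0.5))  # ['block1']
```

Storing each octree node's min/max lets the server prune whole subtrees without touching the raw samples, which is what keeps the test cheap.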
Automatically identifying protein conformations can yield multiple candidate structures. Potential candidates are examined further to cull false positives, and individual conformations and the collection as a whole are compared when seeking flaws. Desktop displays are ineffective for this task due to their limited size and resolution: a user must either sacrifice large-scale content by viewing the micro level in high detail, or view the macro level while forfeiting small details. We address this trade-off by utilizing multiple high-resolution displays. Using 27 50-inch high-resolution displays with active stereoscopic 3D and modified virtual-environment software, each display presents a protein that users can manipulate. Such an environment enables users to gain extensive insight at both the micro and macro levels when performing structural comparisons among the candidate structures. Integrating stereoscopic 3D improves the user's ability to judge the conformations' spatial relationships. To facilitate intuitive interaction, gesture recognition as well as body tracking are used: the user looks at the protein of interest, selects a modality via gesture, and the user's motions provide intuitive navigation functions such as panning, rotating, and zooming. Using this approach, users are able to perform protein structure comparison through intuitive controls without sacrificing important visual details at any scale.
The analysis of equine motion has a long tradition in human history. Equine biomechanics aims to detect characteristics of horses that are indicative of good performance. In veterinary medicine especially, gait analysis plays an important role in diagnostics and in emerging research on the long-term effects of athletic exercise. More recently, the incorporation of motion
capture technology contributed to an easier and faster analysis, with a trend from mere observation of horses towards
the analysis of multivariate time-oriented data. However, because this topic has only recently been raised in an interdisciplinary context, there is as yet a lack of visual-interactive interfaces to facilitate time-series data analysis and information discourse for the veterinary and biomechanics communities. In this design study, we bring visual analytics technology into
the respective domains, which, to the best of our knowledge, has not been approached before. Based on requirements developed in
the domain characterization phase, we present a visual-interactive system for the exploration of horse motion data. The
system provides multiple views which enable domain experts to explore frequent poses and motions, but also to drill down
to interesting subsets, possibly containing unexpected patterns. We show the applicability of the system in two exploratory
use cases, one on the comparison of different gait motions, and one on the analysis of lameness recovery. Finally, we
present the results of a summative user study conducted in the environment of the domain experts. The overall outcome
was a significant improvement in effectiveness and efficiency in the analytical workflow of the domain experts.
A wealth of census data relating to hierarchical administrative subdivisions is now available. It is therefore desirable for hierarchical data visualization techniques to offer a spatially consistent representation of such data. This paper focuses
on a widely used technique for hierarchical data, namely treemaps, with a particular emphasis on a specific family of
treemaps, designed to take into account spatial constraints in the layout, called Spatially Dependent Treemap (SDT). The
contributions of this paper are threefold. First, we present the "Weighted Maps", a novel SDT layout algorithm and discuss
the algorithmic differences with the other state-of-the-art SDT algorithms. Second, we present the quantitative results and
analyses of a number of metrics that were used to assess the quality of the resulting layouts. The analyses are illustrated with
figures generated from various datasets. Third, we show that the Weighted Maps algorithm offers a significant advantage
for the layout of large flat cartograms and multilevel hierarchies having a large branching factor.
We describe an approach to measuring visualization fidelity for encodings of data to visual attributes, based on the number of unique levels that can be perceived, along with a summarization across multiple attributes to compare relative lossiness across visualization alternatives. These metrics can be assessed at design time in order to compare the lossiness of
different visualizations to aid in the selection between design alternatives. Examples are provided showing the
application of these metrics to two different visualization design scenarios. Limitations and dependencies are noted
along with recommendations for other metrics that can be used in conjunction with fidelity and lossiness to gauge
effectiveness at design-time.
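A hedged reading of this kind of metric: if an attribute takes k distinct values in the data but the chosen visual channel supports only n perceivably unique levels, the encoding loses log2(k / n) bits per mark (zero when the channel can carry every level). The channel capacities below are illustrative numbers, not measurements from the paper.

```python
import math

# Design-time lossiness sketch: bits of information lost when a channel's
# perceivable levels cannot cover the data's distinct values.

def lossiness(distinct_values, perceivable_levels):
    return max(0.0, math.log2(distinct_values / perceivable_levels))

channels = {"position": 100, "color_hue": 7, "shape": 5}  # assumed capacities
data_cardinality = 50
for channel, levels in channels.items():
    print(channel, round(lossiness(data_cardinality, levels), 2))
```

Summing such per-attribute terms gives a single lossiness score per design alternative, which is the kind of comparison the abstract describes making at design time.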
Morse decompositions have been proposed to compute and represent the topological structure of steady vector fields.
Compared to conventional differential topology, Morse decomposition and the resulting Morse Connection Graph (MCG) are numerically stable. However, the granularity of the original Morse decomposition is constrained by the resolution of the underlying spatial discretization, which typically results in a non-smooth representation. In this work, an Image-Space
Morse decomposition (ISMD) framework is proposed to address this issue. Compared to the original method, ISMD first
projects the original vector field onto an image plane, then computes the Morse decomposition based on the projected field
with pixels as the smallest elements. Thus, pixel-level accuracy can be achieved. This ISMD framework has been applied
to a number of synthetic and real-world steady vector fields to demonstrate its utility. The performance of the ISMD
is carefully studied and reported. Finally, with ISMD an ensemble Morse decomposition can be studied and visualized, which is shown to be useful for visualizing the stability of the Morse sets with respect to the error introduced in the numerical computation and the perturbation of the input vector fields.
As computational capabilities increasingly outpace disk speeds on leading supercomputers, scientists will, in turn, be
increasingly unable to save their simulation data at its native resolution. One solution to this problem is to compress these
data sets as they are generated and visualize the compressed results afterwards. We explore this approach, specifically
subsampling velocity data and the resulting errors for particle advection-based flow visualization. We compare three techniques: random selection of subsamples, selection at regular locations corresponding to multi-resolution reduction, and a novel technique we introduce for informed selection of subsamples. Furthermore, we explore an adaptive system that
exchanges the subsampling budget over parallel tasks, to ensure that subsampling occurs at the highest rate in the areas that
need it most. We perform supercomputing runs to measure the effectiveness of the selection and adaptation techniques.
Overall, we find that adaptation is very effective, and, among selection techniques, our informed selection provides the most
accurate results, followed by the multi-resolution selection, and with the worst accuracy coming from random subsamples.
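Two of the compared selection strategies can be sketched on a 1D velocity signal, with reconstruction error after linear interpolation as a simple quality measure. (The informed selection, not shown, would bias samples toward regions where the field is hardest to reconstruct.)

```python
import random

# Regular (multi-resolution style) vs. random subsample selection, scored
# by total absolute error after linear reconstruction. Toy 1D data.

def subsample_regular(field, budget):
    step = (len(field) - 1) / (budget - 1)
    return sorted({round(i * step) for i in range(budget)})

def subsample_random(field, budget, seed=0):
    rng = random.Random(seed)
    inner = rng.sample(range(1, len(field) - 1), budget - 2)
    return sorted([0, len(field) - 1] + inner)   # keep the endpoints

def reconstruction_error(field, kept):
    err = 0.0
    for i in range(len(field)):
        lo = max(k for k in kept if k <= i)
        hi = min(k for k in kept if k >= i)
        approx = field[lo] if lo == hi else (
            field[lo] + (field[hi] - field[lo]) * (i - lo) / (hi - lo))
        err += abs(field[i] - approx)
    return err

field = [v * v for v in range(9)]            # smoothly varying "velocity"
regular = subsample_regular(field, budget=3)
print(regular, reconstruction_error(field, regular))  # [0, 4, 8] 20.0
```

The adaptive system in the paper effectively reallocates `budget` across parallel tasks so that regions with high reconstruction error receive more samples.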
Where the computation of particle trajectories in classic vector field representations requires computationally involved numerical integration, a Lagrangian representation in the form of a flow map opens up alternative ways of trajectory
extraction through interpolation. In our paper, we present a novel re-organization of the Lagrangian representation by
sub-sampling a pre-computed set of trajectories into multiple levels of resolution, maintaining a bound over the amount of
memory mapped by the file system. We exemplify the advantages of replacing integration with interpolation for particle
trajectory calculation through a real-time, low memory cost, interactive exploration environment for the study of flow fields.
Beginning with a base resolution, once an area of interest is located, additional trajectories from other levels of resolution
are dynamically loaded, densely covering those regions of the flow field that are relevant for the extraction of the desired
feature. We show that as more trajectories are loaded, the accuracy of the extracted features converges to the accuracy of
the flow features extracted from numerical integration with the added benefit of real-time, non-iterative, multi-resolution
path and time surface extraction.
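The interpolation-in-place-of-integration idea can be sketched in 1D: a precomputed flow map stores, per seed, where a particle ends up after time T, and a new seed between stored ones is advected by interpolating the stored endpoints rather than re-running an integrator.

```python
import math

# Flow-map lookup sketch: piecewise-linear interpolation of precomputed
# trajectory endpoints. seeds and endpoints are parallel, sorted lists.

def flow_map_lookup(seeds, endpoints, x):
    for (s0, e0), (s1, e1) in zip(
            zip(seeds, endpoints), zip(seeds[1:], endpoints[1:])):
        if s0 <= x <= s1:
            t = (x - s0) / (s1 - s0)
            return e0 + t * (e1 - e0)
    raise ValueError("seed outside the precomputed range")

# For the linear flow u(x) = x, the exact map after T = 1 is x * e, so
# interpolating between stored endpoints reproduces the true trajectory.
seeds = [1.0, 1.1]
endpoints = [s * math.e for s in seeds]
print(flow_map_lookup(seeds, endpoints, 1.05))
```

For nonlinear flows the interpolation is only approximate; loading denser trajectory levels around a region of interest, as the system above does, is what drives the accuracy toward that of direct integration.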
The density of points within multidimensional clusters can impact the effective representation of distances and groups when
projecting data from higher dimensions onto a lower dimensional space. This paper examines the use of motion to retain
an accurate representation of the point density of clusters that might otherwise be lost when a multidimensional dataset is
projected into a 2D space. We investigate how users interpret motion in 2D scatterplots and whether or not they are able to
effectively interpret the point density of the clusters through motion. Specifically, we consider different types of density-based
motion, where the magnitude of the motion is directly related to the density of the clusters. We conducted a series
of user studies with synthetic datasets to explore how motion can help users in various multidimensional data analyses,
including cluster identification, similarity seeking, and cluster ranking tasks. In a first user study, we evaluated the motions
in terms of task success, task completion times, and subject confidence. Our findings indicate that, for some tasks, motion
outperforms static scatterplots; circular path motions in particular give significantly better results compared to the
other motions. In a second user study, we found that users were easily able to distinguish clusters with different densities
as long as the magnitudes of motion were above a particular threshold. Our results indicate that it may be effective to
incorporate motion into visualization systems that enable the exploration and analysis of multidimensional data.
Color is one of the most important visual variables since it can be combined with any other visual mapping to encode
information without using additional space on the display. Encoding one or two dimensions with color is widely explored
and discussed in the field. Mapping multi-dimensional data to color is also applied in a vast number of applications, either to indicate similar elements or to discriminate between different elements or (multi-dimensional) structures on the screen. A variety
of 2D colormaps exists in literature, covering a large variance with respect to different perceptual aspects. Many of the
colormaps have a different perspective on the underlying data structure as a consequence of the various analysis tasks that
exist for multivariate data. Thus, a large design space for 2D colormaps exists which makes the development and use of
2D colormaps cumbersome. According to our literature research, 2D colormaps have not been subject of in-depth quality
assessment. Therefore, we present a survey of static 2D colormaps as applied for information visualization and related fields.
In addition, we map seven devised quality assessment measures for 2D colormaps to seven relevant tasks for multivariate
data analysis. Finally, we present the quality assessment results of the 2D colormaps with respect to the seven analysis tasks,
and contribute guidelines about which colormaps to select or create for each analysis task.
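A common way to construct a simple 2D colormap is to interpolate between four corner colors over the unit square. This is a minimal sketch of that idea, not any specific colormap from the survey; the corner-color scheme and function name are illustrative:

```python
def bilinear_colormap(u, v, c00, c10, c01, c11):
    """Bilinearly interpolate four corner colors over the unit square.

    u, v in [0, 1] are the two normalized data dimensions; each corner
    color is an (r, g, b) tuple. Returns the blended (r, g, b)."""
    return tuple(
        (1 - u) * (1 - v) * a + u * (1 - v) * b + (1 - u) * v * c + u * v * d
        for a, b, c, d in zip(c00, c10, c01, c11)
    )

# Example: map two data dimensions onto a red/green/blue/yellow square.
color = bilinear_colormap(0.5, 0.5,
                          (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0))
```

Note that such RGB-space interpolation is perceptually non-uniform, which is exactly the kind of property the quality measures discussed above are meant to assess.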
Managing complex data flows and update patterns is one of the most difficult challenges in interactive data visualization. For example, constructing interactive visualizations with multiple linked views can be a daunting task. Functional reactive programming provides approaches for declaratively specifying data dependency graphs and maintaining them automatically. We argue that functional reactive programming is an appropriate and effective abstraction for interactive data visualization. We demonstrate the effectiveness of our proposed approach in several visualization examples including multiple linked views. We also provide a catalog of reusable reactive visualization components.
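The essence of the functional reactive approach is a dependency graph of values that recompute automatically when their inputs change — which is what keeps linked views consistent without manual update wiring. A minimal sketch of such a signal abstraction, under assumed names (`Signal`, `lift`) that are not from the paper:

```python
class Signal:
    """A minimal reactive value: subscribers run whenever it changes."""
    def __init__(self, value):
        self._value = value
        self._subscribers = []

    @property
    def value(self):
        return self._value

    def set(self, value):
        self._value = value
        for fn in self._subscribers:
            fn()

    def subscribe(self, fn):
        self._subscribers.append(fn)

def lift(fn, *signals):
    """Derive a signal whose value is fn applied to the input signals.

    The derived signal recomputes automatically when any input changes,
    forming a declarative data dependency graph."""
    out = Signal(fn(*(s.value for s in signals)))
    def recompute():
        out.set(fn(*(s.value for s in signals)))
    for s in signals:
        s.subscribe(recompute)
    return out
```

In a visualization setting, a brushed selection in one view would be a `Signal`, and each linked view a `lift`-derived signal over it, so brushing updates all views without explicit event plumbing.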
In an emergency situation such as hemorrhage, doctors need to predict which patients need immediate treatment and care. This task is difficult because of the diverse response to hemorrhage across the human population. Ensemble physiological simulations provide a means to sample a diverse range of subjects and may have a better chance of containing the correct solution. However, revealing the patterns and trends in the ensemble simulation is a challenging task. We have developed a visualization framework for ensemble physiological simulations. The visualization helps users identify trends among ensemble members, classify ensemble members into subpopulations for analysis, and predict future events by matching a new patient's data to existing ensembles. We demonstrated the effectiveness of the visualization on simulated physiological data. The lessons learned here can be applied to clinically-collected physiological data in the future.
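The matching step — aligning a new patient with the closest existing ensemble member — can be sketched as a nearest-trajectory search. This is an illustrative sketch only; the distance metric (sum of squared differences over a vital-sign time series) and the function name are assumptions, not the paper's method:

```python
def nearest_member(patient, ensemble, weights=None):
    """Return the index of the ensemble member closest to a patient.

    patient and each ensemble member are equal-length sequences of a
    sampled vital sign (e.g. mean arterial pressure over time);
    similarity is an optionally weighted sum of squared differences."""
    def distance(member):
        w = weights or [1.0] * len(patient)
        return sum(wi * (p - m) ** 2
                   for wi, p, m in zip(w, patient, member))
    return min(range(len(ensemble)), key=lambda i: distance(ensemble[i]))
```

Once matched, the remainder of the best-matching member's simulated trajectory can serve as a rough prediction for the new patient.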
Savors is a visualization framework that supports the ingestion of data streams created by arbitrary command
pipelines. Multiple data streams can be shown synchronized by time in the same or different views, which can be
arranged in any layout. These capabilities, combined with a powerful parallelization mechanism and interaction
models already familiar to administrators, allow Savors to display complex visualizations of data streamed from
many different systems with minimal effort. This paper presents the design and implementation of Savors and
provides example use cases that illustrate many of the supported visualization types.
Most genome browsers display DNA linearly, using single-dimensional depictions that are useful to examine certain epigenetic mechanisms such as DNA methylation. However, these representations are insufficient to visualize intrachromosomal interactions and relationships between distal genome features. Relationships between DNA regions may be difficult to decipher or missed entirely if those regions are distant in one dimension but spatially proximal when mapped to three-dimensional space. For example, the visualization of enhancers folding over genes can only be fully expressed in three-dimensional space. Thus, to accurately understand DNA behavior during gene expression, a means to model chromosomes is essential. Using coordinates generated from Hi-C interaction frequency data, we have created interactive 3D models of whole chromosome structures and their respective domains. We have also rendered information on genomic features such as genes, CTCF binding sites, and enhancers. The goal of this article is to present the procedure, findings, and conclusions of our models and renderings.
Ensembles are an important tool for researchers to provide accurate forecasts and proper validation of their models. To
accurately analyze and understand the ensemble data, it is important that researchers clearly and efficiently visualize the
uncertainty of their model output. In this paper, we present two methods for visualizing uncertainty in 1D river model
ensembles. We draw on the strengths of commonly used techniques for analyzing statistical data and apply them to
2D and 3D visualizations of inundation maps. The resulting visualizations give researchers and forecasters an easy
way to quickly identify the areas with the highest probability of inundation.
In this paper, we propose a novel remote visualization system based on particle-based volume rendering (PBVR),1
which enables interactive analyses of extreme-scale volume data located on remote computing systems. The re-
mote PBVR system consists of a Server, which generates particles for rendering, and a Client, which performs the
volume rendering; the particle data size is significantly smaller than the original volume data. Depending on
network bandwidth, the level of detail of the images is flexibly controlled to attain high frame rates. The Server is highly
parallelized on various parallel platforms with a hybrid programming model. The mapping process is accelerated
by two orders of magnitude compared with a single CPU. Structured and unstructured volume data with
~10^8 cells are processed within a few seconds. Compared with commodity client/server visualization tools, the
total processing cost is dramatically reduced by using the proposed system.
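The bandwidth-driven level-of-detail control can be sketched as a per-frame particle budget: compute how many particles fit on the link at the target frame rate and thin the particle set to match. This is an illustrative sketch only; the function names, the bytes-per-particle figure, and random subsampling are assumptions, not the system's actual LOD mechanism:

```python
import random

def subsample_particles(particles, bandwidth_bps, target_fps,
                        bytes_per_particle=28, seed=0):
    """Thin a particle set so one frame's worth fits the network link.

    budget = (bandwidth in bytes/s) / fps / particle size; if the set
    already fits, it is returned unchanged. Hypothetical 28-byte
    particles (e.g. 3 floats position, 4 bytes RGBA color, padding)."""
    budget = int(bandwidth_bps / 8 / target_fps / bytes_per_particle)
    if len(particles) <= budget:
        return list(particles)
    rng = random.Random(seed)  # fixed seed for a reproducible subset
    return rng.sample(particles, budget)
```

On a slow link the budget shrinks and the rendered image coarsens; on a fast link more particles survive, trading image fidelity for frame rate exactly as described above.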