Proceedings Volume 8654

Visualization and Data Analysis 2013

Pak Chung Wong, David L. Kao, Ming C. Hao, et al.
View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 18 January 2013
Contents: 12 Sessions, 35 Papers, 0 Presentations
Conference: IS&T/SPIE Electronic Imaging 2013
Volume Number: 8654

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 8654
  • High Dimensional and Multi-Focus Visualization
  • Multivariate Time Series
  • Flow and Volume Visualization
  • High Performance Computing
  • Keynote Session I
  • Biomedical Visualization
  • Human Factors
  • Exploratory Data Analysis
  • Data Analysis Techniques
  • Interactive Techniques
  • Interactive Paper Session
Front Matter: Volume 8654
Front Matter: Volume 8654
This PDF file contains the front matter associated with SPIE Proceedings Volume 8654 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
High Dimensional and Multi-Focus Visualization
An interactive visual testbed system for dimension reduction and clustering of large-scale high-dimensional data
Jaegul Choo, Hanseung Lee, Zhicheng Liu, et al.
Many modern data sets, such as text and image collections, can be represented in high-dimensional vector spaces and have benefited from advanced computational methods. Visual analytics approaches have contributed greatly to data understanding and analysis due to their capability of leveraging humans' ability for quick visual perception. However, visual analytics targeting large-scale data such as text and image data has been challenging due to the limited screen space, in terms of both the number of data points and the number of features that can be represented. Among the various computational methods supporting visual analytics, dimension reduction and clustering have played essential roles by reducing these numbers in an intelligent way to visually manageable sizes. Given the numerous dimension reduction and clustering methods available, however, deciding on the choice of algorithms and their parameters becomes difficult. In this paper, we present an interactive visual testbed system for dimension reduction and clustering in large-scale high-dimensional data analysis. The testbed system enables users to apply various dimension reduction and clustering methods with different settings, visually compare the results from different algorithmic methods to obtain rich knowledge for the data and tasks at hand, and eventually choose the most appropriate combination of algorithms and parameters. Using various data sets such as documents, images, and others that are already encoded as vectors, we demonstrate how the testbed system can support these tasks.
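The compare-and-choose workflow described in this abstract can be illustrated with a minimal, hypothetical sketch using scikit-learn. The particular algorithms (PCA, t-SNE, k-means, agglomerative clustering) and the silhouette score used to compare layouts are stand-ins for illustration only, not the testbed's actual method set.

```python
# Minimal sketch of the "apply several methods, compare the results" workflow.
# All algorithm and parameter choices here are illustrative.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score

X, _ = load_digits(return_X_y=True)                  # high-dimensional vector data

reducers = {"PCA": PCA(n_components=2),
            "t-SNE": TSNE(n_components=2, init="pca", random_state=0)}
clusterers = {"k-means": KMeans(n_clusters=10, n_init=10, random_state=0),
              "agglomerative": AgglomerativeClustering(n_clusters=10)}

for r_name, reducer in reducers.items():
    Y = reducer.fit_transform(X)                     # 2D embedding for visual comparison
    for c_name, clusterer in clusterers.items():
        labels = clusterer.fit_predict(X)            # cluster in the original space
        score = silhouette_score(Y, labels)          # crude proxy for visual separation
        print(f"{r_name} + {c_name}: 2D silhouette = {score:.3f}")
```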
Multi-focus and multi-level techniques for visualization and analysis of networks with thematic data
Michele Cossalter, Ole J. Mengshoel, Ted Selker
Information-rich data sets bring several challenges in the areas of visualization and analysis, even when associated with node-link network visualizations. This paper presents an integration of multi-focus and multi-level techniques that enables interactive, multi-step comparisons in node-link networks. We describe NetEx, a visualization tool that enables users to simultaneously explore different parts of a network and its thematic data, such as time series or conditional probability tables. NetEx, implemented as a Cytoscape plug-in, has been applied to the analysis of electrical power networks, Bayesian networks, and the Enron e-mail repository. In this paper we briefly discuss visualization and analysis of the Enron social network, but focus on data from an electrical power network. Specifically, we demonstrate how NetEx supports the analytical task of electrical power system fault diagnosis. Results from a user study with 25 subjects suggest that NetEx enables more accurate isolation of complex faults compared to a specially designed software tool.
Multivariate Time Series
Visual analytics of cyber physical data streams using spatio-temporal radial pixel visualization
M. Hao, M. Marwah, S. Mittelstaedt, et al.
Cyber physical systems (CPS), such as smart buildings and data centers, are richly instrumented systems composed of tightly coupled computational and physical elements that generate large amounts of data. To explore CPS data and obtain actionable insights, we present a new approach called Radial Pixel Visualization (RPV), which uses multiple concentric rings to show the data in a compact circular layout of pixel cells, each ring containing the values for a specific variable over time and each pixel cell representing an individual data value at a specific time. RPV provides an effective visual representation of the locality and periodicity of high-volume, multivariate data streams. RPVs may have an additional analysis ring for highlighting the results of correlation analysis or peak point detection. Our real-world applications demonstrate the effectiveness of this approach. The application examples show how RPV can help CPS administrators identify periodic thermal hot spots, find root causes of cooling problems, understand building energy consumption, and optimize IT-service workloads.
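The layout described above (one concentric ring per variable, one pixel cell per time step) can be roughly illustrated with the following matplotlib sketch on synthetic data; it is an approximation of the idea, not the authors' RPV implementation.

```python
# Sketch of a radial pixel layout: one concentric ring per variable,
# one angular cell per time step, cell color = data value at that time.
import numpy as np
import matplotlib.pyplot as plt

n_vars, n_steps = 5, 288                      # e.g. 5 sensors sampled every 5 minutes for a day
rng = np.random.default_rng(0)
data = np.cumsum(rng.normal(size=(n_vars, n_steps)), axis=1)   # synthetic data streams

theta = np.linspace(0.0, 2.0 * np.pi, n_steps + 1)   # angular cell edges (time)
radius = np.arange(1, n_vars + 2)                    # ring edges (one ring per variable)

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.pcolormesh(theta, radius, data, shading="flat")   # rings of pixel cells
ax.set_yticks(radius[:-1] + 0.5)
ax.set_yticklabels([f"var {i}" for i in range(n_vars)])
ax.set_xticks([])
plt.show()
```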
Exploring large scale time-series data using nested timelines
Zaixian Xie, Matthew O. Ward, Elke A. Rundensteiner
When data analysts study time-series data, an important task is to discover how data patterns change over time. If the dataset is very large, this task becomes challenging. Researchers have developed many visualization techniques to help address this problem. However, little work has been done regarding the changes of multivariate patterns, such as linear trends and clusters, on time-series data. In this paper, we describe a set of history views to fill this gap. This technique works under two modes: merge and non-merge. For the merge mode, merge algorithms were applied to selected time windows to generate a change-based hierarchy. Contiguous time windows having similar patterns are merged first. Users can choose different levels of merging with the tradeoff between more details in the data and less visual clutter in the visualizations. In the non-merge mode, the framework can use natural hierarchical time units or one defined by domain experts to represent timelines. This can help users navigate across long time periods. Grid-based views were designed to provide a compact overview for the history data. In addition, MDS pattern starfields and distance maps were developed to enable users to quickly investigate the degree of pattern similarity among different time periods. The usability evaluation demonstrated that most participants could understand the concepts of the history views correctly and finished assigned tasks with a high accuracy and relatively fast response time.
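The change-based merge hierarchy mentioned above can be sketched as a greedy, bottom-up merge of adjacent time windows. In this hypothetical sketch a window's pattern is reduced to its mean vector and dissimilarity is Euclidean distance; the paper's merge criteria operate on richer multivariate patterns such as linear trends and clusters.

```python
# Greedy bottom-up merging of contiguous time windows with similar patterns.
# "Pattern" and "dissimilarity" are deliberately simplified for illustration.
import numpy as np

def merge_hierarchy(windows):
    """windows: list of (start, end, pattern) tuples; returns the merge order."""
    merges = []
    while len(windows) > 1:
        # find the most similar pair of *adjacent* windows
        dists = [np.linalg.norm(windows[i][2] - windows[i + 1][2])
                 for i in range(len(windows) - 1)]
        i = int(np.argmin(dists))
        a, b = windows[i], windows[i + 1]
        merged = (a[0], b[1], (a[2] + b[2]) / 2.0)   # pool the two patterns
        merges.append((a[:2], b[:2], dists[i]))
        windows[i:i + 2] = [merged]
    return merges

rng = np.random.default_rng(1)
series = rng.normal(size=(100, 3))                    # 100 time steps, 3 variables
wins = [(s, s + 10, series[s:s + 10].mean(axis=0)) for s in range(0, 100, 10)]
for left, right, d in merge_hierarchy(wins):
    print(f"merge {left} + {right}  (distance {d:.2f})")
```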
Flow and Volume Visualization
Visibility-difference entropy for automatic transfer function generation
Philipp Schlegel, Renato Pajarola
Direct volume rendering allows for interactive exploration of volumetric data and has become an important tool in many visualization domains. But the insight and information that can be obtained are dependent on the transfer function defining the transparency of voxels. Constructing good transfer functions is one of the most time consuming and cumbersome tasks in volume visualization. We present a novel general purpose method for automatically generating an initial set of best transfer function candidates. The generated transfer functions reveal the major structural features within the volume and allow for an efficient initial visual analysis, serving as a basis for further interactive exploration in particular of originally unknown data. The basic idea is to introduce a metric as a measure of the goodness of a transfer function which indicates the information that can be gained from rendered images by interactive visualization. In contrast to prior methods, our approach does not require a user feedback-loop, operates exclusively in image space and takes the characteristics of interactive data exploration into account. We show how our new transfer function generation method can uncover the major structures of an unknown dataset within only a few minutes.
Coherent view-dependent streamline selection for importance-driven flow visualization
Jun Ma, Chaoli Wang, Ching-Kuang Shene
Streamline visualization can be formulated as the problem of streamline placement or streamline selection. In this paper, we present an importance-driven approach to view-dependent streamline selection that guarantees coherent streamline update when the view changes gradually. Given a large number of randomly or uniformly seeded and traced streamlines and sample viewpoints, our approach evaluates, for each streamline, the view-dependent importance by considering the amount of information shared by the 3D streamline and its 2D projection as well as how stereoscopic the streamline’s shape is reflected under each viewpoint. We achieve coherent view-dependent streamline selection following a two-pass solution that considers i) the relationships between local viewpoints and the global streamline set selected in a view-independent manner and ii) the continuity between adjacent viewpoints. We demonstrate the effectiveness of our approach with several synthesized and simulated flow fields and compare our view-dependent streamline selection algorithm with a naïve algorithm that selects streamlines solely based on the information at the current viewpoint.
Single-pass GPU-raycasting for structured adaptive mesh refinement data
Ralf Kaehler, Tom Abel
Structured Adaptive Mesh Refinement (SAMR) is a popular numerical technique to study processes with high spatial and temporal dynamic range. It reduces computational requirements by adapting the lattice on which the underlying differential equations are solved to most efficiently represent the solution. Particularly in astrophysics and cosmology, such simulations can now capture spatial scales ten orders of magnitude apart and more. The irregular locations and extents of the refined regions in the SAMR scheme, and the fact that different resolution levels partially overlap, pose a challenge for GPU-based direct volume rendering methods. kD-trees have proven advantageous for subdividing the data domain into non-overlapping blocks of equally sized cells, optimal for the texture units of current graphics hardware, but previous GPU-supported raycasting approaches for SAMR data using this data structure required a separate rendering pass for each node, preventing the application of many advanced lighting schemes that require simultaneous access to more than one block of cells. In this paper we present the first single-pass GPU-raycasting algorithm for SAMR data that is based on a kD-tree. The tree is efficiently encoded by a set of 3D textures, which allows complete rays to be sampled adaptively entirely on the GPU without any CPU interaction. We discuss two different data storage strategies for accessing the grid data on the GPU and apply them to several datasets to demonstrate the benefits of the proposed method.
High Performance Computing
Multi-user smartphone-based interaction with large high-resolution displays
Lynn Nguyen, Jürgen Schulze
We investigate the practicality of using smartphones to interact with large high-resolution displays, such as tiled display walls. To accomplish such a task, we found that in most cases it is not necessary to find the spatial location of the phone relative to the display; rather, we can identify the object a user wants to interact with through image recognition. The interaction with the object itself can then be done using the smartphone as the medium. This trivially allows multi-user interaction with a large display wall, provided that each user has a smartphone. To investigate the feasibility of this concept we implemented a prototype.
Stereo frame decomposition for error-constrained remote visualization
Steven Martin, Han-Wei Shen
As growth in dataset sizes continues to exceed growth in available bandwidth, new solutions are needed to facilitate efficient visual analysis workflows. Remote visualization can enable the colocation of visual analysis compute resources with simulation compute resources, reducing the impact of bandwidth constraints. While there are many off-the-shelf solutions available for general remoting needs, there is substantial room for improvement in the interactivity they offer, and none focus on supporting stereo remote visualization with programmable error bounds. We propose a novel system enabling efficient compression of stereo video streams using standard codecs that can be integrated with existing remoting solutions, while at the same time offering error constraints that provide users with fidelity guarantees. By taking advantage of interocular coherence, the flexibility permitted by error constraints, and knowledge of scene depth and camera information, our system offers improved remote visualization frame rates.
Keynote Session I
Why high performance visual data analytics is both relevant and difficult
E. Wes Bethel, Prabhat Prabhat, Suren Byna, et al.
Data visualization, as well as data analysis and data analytics, are all an integral part of the scientific process. Collectively, these technologies provide the means to gain insight into data of ever-increasing size and complexity. Over the past two decades, a substantial amount of visualization, analysis, and analytics R&D has focused on the challenges posed by increasing data size and complexity, as well as on the increasing complexity of a rapidly changing computational platform landscape. While some of this research focuses solely on technologies, such as indexing and searching or novel analysis or visualization algorithms, other R&D projects focus on applying technological advances to specific application problems. Some of the most interesting and productive results occur when these two activities, R&D and application, are conducted in a collaborative fashion, where application needs drive R&D, and R&D results are immediately applicable to real-world problems.
Biomedical Visualization
Three-dimensional volume analysis of vasculature in engineered tissues
Mohammed YousefHussien, Kelley Garvin, Diane Dalecki, et al.
Three-dimensional textural and volumetric image analysis holds great potential for understanding the image data produced by multi-photon microscopy. In this paper, an algorithm that quantitatively analyzes the texture and the morphology of vasculature in engineered tissues is proposed. The investigated 3D artificial tissues consist of Human Umbilical Vein Endothelial Cells (HUVEC) embedded in collagen exposed to two regimes of ultrasound standing wave fields under different pressure conditions. Textural features were evaluated using the normalized Gray-Level Co-occurrence Matrix (GLCM) combined with Gray-Level Run Length Matrix (GLRLM) analysis. To minimize error resulting from any possible volume rotation and to provide a comprehensive textural analysis, an averaged version of nine GLCM and GLRLM orientations is used. To evaluate volumetric features, an automatic threshold using the gray-level mean value is utilized. Results show that our analysis is able to differentiate among the exposed samples, due to morphological changes induced by the standing wave fields. Furthermore, we demonstrate that providing more textural parameters than are currently reported in the literature enhances the quantitative understanding of the heterogeneity of artificial tissues.
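The two ingredients mentioned above, orientation-averaged co-occurrence analysis and a mean-value threshold for volumetric features, can be sketched in numpy as follows. The offsets, number of gray levels, and the contrast feature are illustrative; the paper averages nine GLCM/GLRLM orientations and reports a larger feature set.

```python
# Sketch of orientation-averaged GLCM texture features and mean-value
# thresholding on a 3D volume. Parameters are illustrative only.
import numpy as np

def glcm_3d(vol, offset, levels=16):
    """Normalized gray-level co-occurrence matrix for one 3D offset (nonnegative components)."""
    dz, dy, dx = offset
    edges = np.linspace(vol.min(), vol.max(), levels + 1)[1:-1]
    q = np.digitize(vol, edges)                           # quantize to 0..levels-1
    a = q[:q.shape[0] - dz, :q.shape[1] - dy, :q.shape[2] - dx]
    b = q[dz:, dy:, dx:]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1.0)          # count co-occurring gray-level pairs
    return glcm / glcm.sum()

rng = np.random.default_rng(0)
vol = rng.random((64, 64, 64))                            # stand-in for a microscopy volume

offsets = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (0, 1, 1), (1, 1, 0), (1, 0, 1)]
glcm = np.mean([glcm_3d(vol, o) for o in offsets], axis=0)   # orientation-averaged GLCM
i, j = np.indices(glcm.shape)
contrast = ((i - j) ** 2 * glcm).sum()                    # one example GLCM feature

vessel_fraction = (vol > vol.mean()).mean()               # automatic gray-level-mean threshold
print(f"contrast = {contrast:.3f}, above-mean volume fraction = {vessel_fraction:.3f}")
```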
3D surface reconstruction and visualization of the Drosophila wing imaginal disc at cellular resolution
Linge Bai, Thomas Widmann, Frank Jülicher, et al.
Quantifying and visualizing the shape of developing biological tissues provide information about the morphogenetic processes in multicellular organisms. The size and shape of biological tissues depend on the number, size, shape, and arrangement of the constituting cells. To better understand the mechanisms that guide tissues into their final shape, it is important to investigate the cellular arrangement within tissues. Here we present a data processing pipeline to generate 3D volumetric surface models of epithelial tissues, as well as geometric descriptions of the tissues' apical cell cross-sections. The data processing pipeline includes image acquisition, editing, processing and analysis, 2D cell mesh generation, 3D contour-based surface reconstruction, cell mesh projection, followed by geometric calculations and color-based visualization of morphological parameters. In their first utilization we have applied these procedures to construct a 3D volumetric surface model at cellular resolution of the wing imaginal disc of Drosophila melanogaster. The ultimate goal of the reported effort is to produce tools for the creation of detailed 3D geometric models of the individual cells in epithelial tissues. To date, 3D volumetric surface models of the whole wing imaginal disc have been created, and the apicolateral cell boundaries have been identified, allowing for the calculation and visualization of cell parameters, e.g. apical cross-sectional area of cells. The calculation and visualization of morphological parameters show position-dependent patterns of cell shape in the wing imaginal disc. Our procedures should offer a general data processing pipeline for the construction of 3D volumetric surface models of a wide variety of epithelial tissues.
Human Factors
Visual exploration and analysis of human-robot interaction rules
Hui Zhang, Michael J. Boyles
We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots’ responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming interfaces, information visualization, and visual data mining methods to facilitate designing, comprehending, and evaluating HRI interfaces.
Emotion scents: a method of representing user emotions on GUI widgets
Daniel Cernea, Christopher Weber, Achim Ebert, et al.
The world of desktop interfaces has been dominated for years by the concept of windows and standardized user interface (UI) components. Still, while supporting the interaction and information exchange between the users and the computer system, graphical user interface (GUI) widgets are rather one-sided, neglecting to capture the subjective facets of the user experience. In this paper, we propose a set of design guidelines for visualizing user emotions on standard GUI widgets (e.g., buttons, check boxes, etc.) in order to enrich the interface with a new dimension of subjective information by adding support for emotion awareness as well as post-task analysis and decision making. We highlight the use of an EEG headset for recording the various emotional states of the user while he/she is interacting with the widgets of the interface. We propose a visualization approach, called emotion scents, that allows users to view emotional reactions corresponding to different GUI widgets without influencing the layout or changing the positioning of these widgets. Our approach does not focus on highlighting the emotional experience during the interaction with an entire system, but on representing the emotional perceptions and reactions generated by the interaction with a particular UI component. Our research is motivated by enabling emotional self-awareness and subjectivity analysis through the proposed emotion-enhanced UI components for desktop interfaces. These assumptions are further supported by an evaluation of emotion scents.
Exploratory Data Analysis
Visual analysis of situationally aware building evacuations
Jack Guest, Todd Eaglin, Kalpathi Subramanian, et al.
Rapid evacuation of large urban structures (campus buildings, arenas, stadiums, etc.) is a complex operation and of prime interest to emergency responders and planners. Although there is a considerable body of work in evacuation algorithms and methods, most of these are impractical to use in real-world scenarios (non real-time, for instance) or have difficulty handling scenarios with dynamically changing conditions. Our goal in this work is to develop computer visualizations and real-time visual analytic tools for building evacuations, in order to provide situational awareness and decision support to first responders and emergency planners. We have augmented traditional evacuation algorithms in the following important ways: (1) facilitating real-time, complex user interaction with first responder teams as information is received during an emergency; (2) providing visual reporting tools for spatial occupancy, temporal cues, and procedural recommendations automatically and at adjustable levels; and (3) combining multi-scale building models, heuristic evacuation models, and unique graph manipulation techniques to produce near real-time situational awareness. We describe our system, methods, and their application using campus buildings as an example. We also report the results of evaluating our system in collaboration with our campus police and safety personnel, via a table-top exercise consisting of three different scenarios, and their resulting assessment of the system.
Improving projection-based data analysis by feature space transformations
Matthias Schaefer, Leishi Zhang, Tobias Schreck, et al.
Generating an effective visual embedding of high-dimensional data is difficult: the analyst expects to see the structure of the data in the visualization, as well as patterns and relations. Given the high dimensionality, noise, and imperfect embedding techniques, it is hard to come up with a satisfactory embedding that preserves the data structure well while highlighting patterns and avoiding visual clutter at the same time. In this paper, we introduce a generic framework for improving the quality of an existing embedding in terms of both structural preservation and class separation by feature space transformations. A compound quality measure based on structural preservation and visual clutter avoidance is proposed to assess the quality of embeddings. We evaluate the effectiveness of our approach by applying it to several widely used embedding techniques using a set of benchmark data sets, and the results are promising.
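A compound embedding-quality score of the general kind described above can be sketched as a weighted combination of a structural-preservation term and a class-separation term. In this hypothetical sketch, preservation is measured as k-nearest-neighbor overlap between the original space and the 2D embedding and separation as a rescaled silhouette score; the paper's actual measures and weighting differ.

```python
# Sketch of a compound embedding-quality score (illustrative definitions only).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors
from sklearn.metrics import silhouette_score

def knn_preservation(X_high, X_low, k=10):
    """Average k-NN neighborhood overlap between high-dimensional and embedded data."""
    idx_h = NearestNeighbors(n_neighbors=k + 1).fit(X_high).kneighbors(
        X_high, return_distance=False)[:, 1:]            # drop the point itself
    idx_l = NearestNeighbors(n_neighbors=k + 1).fit(X_low).kneighbors(
        X_low, return_distance=False)[:, 1:]
    overlap = [len(set(a) & set(b)) / k for a, b in zip(idx_h, idx_l)]
    return float(np.mean(overlap))

X, y = load_iris(return_X_y=True)
Y = PCA(n_components=2).fit_transform(X)                  # an existing embedding

preservation = knn_preservation(X, Y)
separation = (silhouette_score(Y, y) + 1.0) / 2.0         # rescale silhouette to [0, 1]
quality = 0.5 * preservation + 0.5 * separation           # compound score
print(f"preservation={preservation:.3f}, separation={separation:.3f}, quality={quality:.3f}")
```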
Does interactive animation control improve exploratory data analysis of animated trend visualization?
Felwa A. Abukhodair, Bernhard E. Riecke, Halil I. Erhan, et al.
OBJECTIVE: Effectively analyzing trends in temporal data becomes a critical task when the amount of data is large. Motion techniques (animation) for scatterplots make it possible to represent large amounts of data in a single view and make it easy to identify trends and highlight changes. These techniques have recently become popular and, to an extent, successful in describing data in presentations. However, compared to static methods of visualization, scatterplot animations may be hard to perceive when the motions are complex. METHODS: This paper studies the effectiveness of interactive scatterplot animation as a visualization technique for the analysis of large data sets. We compared interactive animations with non-interactive (passive) animations in which participants had no control over the animation. Both conditions were evaluated for specific as well as general comprehension of the data. RESULTS: While interactive animation was more effective for specific information analysis, it led to many misunderstandings in overall comprehension due to the fragmentation of the animation. In general, participants felt that interactivity gave them more confidence and found it more enjoyable and exciting for data exploration. CONCLUSION: Interactive animation of trend visualizations proved to be an effective technique for exploratory data analysis and significantly more accurate than animation alone. With these findings we aim to support the use of interactivity to effectively enhance data exploration in animated visualizations.
Data Analysis Techniques
iMap: a stable layout for navigating large image collections with embedded search
Chaoli Wang, John P. Reese, Huan Zhang, et al.
Effective techniques for organizing and visualizing large image collections are in growing demand as visual search becomes increasingly popular. Targeting an online astronomy archive with thousands of images, we present our solution for image search and clustering based on the evaluation of image similarity using both visual and textual information. To lay out images, we introduce iMap, a treemap-based representation for visualizing and navigating image search and clustering results. iMap not only makes effective use of available display area to arrange images but also maintains stable updates when images are inserted or removed during a query. We also develop an embedded visualization that integrates image tags for in-place search refinement. We show the effectiveness of our approach by demonstrating experimental results and conducting a comparative user study.
uVis Studio: an integrated development environment for visualization
Kostas Pantazos, Mohammad A. Kuhail, Soren Lauesen, et al.
A toolkit facilitates the visualization development process. The process can be further enhanced by integrating toolkits in development environments. This paper describes how the uVis toolkit, a formula-based visualization toolkit, has been extended with a development environment called uVis Studio. Instead of programming, developers apply a Drag-Drop-Set-View-Interact approach. Developers bind controls to data, and the Studio gives immediate visual feedback in the Design Panel. This is a novel feature, called What-You-Bind-Is-What-You-Get. The Studio also provides Modes that allow developers to interact with and view the visualization from the end-user's perspective without switching workspace, and Auto-Completion, a feature of the Property Grid that provides suggestions not only for the formula language syntax but also for the tables, the table fields, and the relationships in the database. We conducted a usability study with six developers to evaluate whether the Studio and its features enhance cognition and facilitate visualization development. The results show that developers appreciated the Drag-Drop-Set-View-Interact approach, the What-You-Bind-Is-What-You-Get feature, the Auto-Completion, and the Modes. Several usability problems were identified, and some suggestions for improvement include new panels, better presentation of the Modes, and better error messages.
Interactive Techniques
Interactive visual comparison of multimedia data through type-specific views
Russ Burtner, Shawn Bohn, Debbie Payne
Analysts who work with collections of multimedia to perform information foraging understand how difficult it is to connect information across diverse sets of mixed media. The wealth of information from blogs, social media, and news sites often can provide actionable intelligence; however, many of the tools used on these sources of content are not capable of multimedia analysis because they only analyze a single media type. As such, analysts are taxed to keep a mental model of the relationships among each of the media types when generating the broader content picture. To address this need, we have developed Canopy, a novel visual analytic tool for analyzing multimedia. Canopy provides insight into the multimedia data relationships by exploiting the linkages found in text, images, and video co-occurring in the same document and across the collection. Canopy connects derived and explicit linkages and relationships through multiple connected visualizations to aid analysts in quickly summarizing, searching, and browsing collected information to explore relationships and align content. In this paper, we will discuss the features and capabilities of the Canopy system and walk through a scenario illustrating how this system might be used in an operational environment.
Evaluating multivariate visualizations on time-varying data
Mark A. Livingston, Jonathan W. Decker, Zhuming Ai
Multivariate visualization techniques have been applied to a wide variety of visual analysis tasks and a broad range of data types and sources. Their utility has been evaluated in a modest range of simple analysis tasks. In this work, we extend our previous task to a case of time-varying data. We implemented five visualizations of our synthetic test data: three previously evaluated techniques (Data-driven Spots, Oriented Slivers, and Attribute Blocks), one hybrid of the first two that we call Oriented Data-driven Spots, and an implementation of Attribute Blocks that merges the temporal slices. We conducted a user study of these five techniques. Our previous finding (with static data) was that users performed best when the density of the target (as encoded in the visualization) was either highest or had the highest ratio to non-target features. The time-varying presentations gave us a wider range of density and density gains from which to draw conclusions; we now see evidence for the density gain as the perceptual measure, rather than the absolute density.
Multi-focus and multi-window techniques for interactive network exploration
Priya Krishnan Sundarararajan, Ole J. Mengshoel, Ted Selker
Network analysts often need to compare nodes in different parts of a network. When zoomed to fit a computer screen, the detailed structure and node labels of even a moderately sized network (say, with 500 nodes) can become invisible or difficult to read. Still, the coarse network structure typically remains visible and helps orient an analyst's zooming, scrolling, and panning operations. These operations are very useful when studying details and reading node labels, but in the process of zooming in on one network region, an analyst may lose track of details elsewhere. To address such problems, we present in this paper multi-focus and multi-window techniques that improve interactive exploration of networks. Based on an analyst's selection of focus nodes, our techniques partition and selectively zoom in on network details, including node labels, close to the focus nodes. Detailed data associated with the zoomed-in nodes can thus be more easily accessed and inspected. The approach enables a user to simultaneously focus on and analyze multiple node neighborhoods while keeping the full network structure in view. We demonstrate our technique by showing how it supports interactive debugging of a Bayesian network model of an electrical power system. In addition, we show that it can simplify visual analysis of an electrical power network as well as a medical Bayesian network.
Interactive Paper Session
Effective color combinations in isosurface visualization
Sussan Einakian, Timothy S. Newman
The suitability of three classes of strategies governing color combination selection in isosurface-based volume visualization is explored. These classes are the use of: (1) harmonious color combinations, (2) disharmonious color combinations, and (3) opponent color combinations. Suitability is assessed here via user evaluation of renderings of isosurfaces with multiple (nested) components. The significance of these user evaluations is also analyzed statistically.
Web tools for rapid experimental visualization prototyping
Jonathan W. Decker, Mark A. Livingston
Quite often a researcher finds themselves looking at spreadsheets of high-dimensional data generated by experimental models and user studies. We can use analysis to challenge or confirm hypotheses, but unexpected results can easily be lost in the shuffle. For this reason, it would be useful to visualize the results so we can explore our data and make new discoveries. Web browsers have become increasingly capable of supporting complex, multi-view applications. Javascript is quickly becoming a de facto standard for scripting, online and offline. This work demonstrates the use of web technologies as a powerful tool for rapid visualization prototyping. We have developed two prototypes: one for high-dimensional results from abELICIT, a multi-agent version of the ELICIT platform tasked with collaborating to identify the parameters of a pending attack; another that displays responses to a user study on the effectiveness of multi-layer visualization techniques. We created coordinated multiple-view prototypes in the Google Chrome web browser written in Javascript, CSS and HTML. We will discuss the benefits and shortcomings of this approach.
Time-based user-movement pattern analysis from location-based social network data
Huey Ling Chuan, Isaraporn Kulkumjon, Surbhi Dangi
Virtual social interactions play an increasingly important role in the discovery of places through digital recommendations. Our hypothesis is that people define the character of a city by the types of places they frequent. After a brief description of our dataset and of anomalies and observations about the data, this paper delves into three distinct approaches to visualizing the dataset, addressing two goals: (1) arriving at a time-based, region-specific recommendation logic for different types of users, classified by the places they frequent; and (2) analyzing the behavior of users who check in in groups of two or more people. The study revealed that distinct patterns exist for people who are residents of the city and for people who are short-term visitors. The frequency of visits, however, depends both on the time of day and on the urban area itself (e.g. eateries, offices, local attractions). The observations can be extended for application in food and travel recommendation engines as well as for research in urban analytics, smart cities, and town planning.
Visualizing vascular structures in virtual environments
Thomas Wischgoll
In order to learn more about the cause of coronary heart diseases and develop diagnostic tools, the extraction and visualization of vascular structures from volumetric scans for further analysis is an important step. By determining a geometric representation of the vasculature, the geometry can be inspected and additional quantitative data calculated and incorporated into the visualization of the vasculature. To provide a more user-friendly visualization tool, virtual environment paradigms can be utilized. This paper describes techniques for interactive rendering of large-scale vascular structures within virtual environments. This can be applied to almost any virtual environment configuration, such as CAVE-type displays. Specifically, the tools presented in this paper were tested on a Barco I-Space and a large 62x108 inch passive projection screen with a Kinect sensor for user tracking.
A combined multidimensional scaling and hierarchical clustering view for the exploratory analysis of multidimensional data
This paper describes a novel information visualization technique that combines multidimensional scaling and hierarchical clustering to support the exploratory analysis of multidimensional data. The technique displays the results of multidimensional scaling using a scatter plot in which the proximity of any two items' representations approximates their similarity according to a Euclidean distance metric. The results of hierarchical clustering are overlaid onto this view by drawing smoothed outlines around each nested cluster. The difference in similarity between successive cluster combinations is used to colour code clusters and make stronger natural clusters more prominent in the display. When a cluster or group of items is selected, multidimensional scaling and hierarchical clustering are re-applied to a filtered subset of the data, and animation is used to smooth the transition between successive filtered views. As a case study we demonstrate the technique being used to analyse survey data relating to the appropriateness of different phrases to different emotionally charged situations.
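The basic combination described above, an MDS layout with hierarchical clustering computed on the same distances, can be sketched with scikit-learn and scipy as follows; the smoothed nested outlines are simplified here to coloring points by cluster, and the data set and parameters are illustrative only.

```python
# Sketch of a combined MDS + hierarchical clustering view.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

X, _ = load_iris(return_X_y=True)
dist = pdist(X)                                       # pairwise Euclidean distances

pos = MDS(n_components=2, dissimilarity="precomputed",
          random_state=0).fit_transform(squareform(dist))   # 2D layout approximating distances
Z = linkage(dist, method="average")                   # hierarchy on the same distances
labels = fcluster(Z, t=3, criterion="maxclust")       # cut the dendrogram into three clusters

plt.scatter(pos[:, 0], pos[:, 1], c=labels, cmap="tab10")
plt.title("MDS layout with hierarchical clusters overlaid")
plt.show()
```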
Visualization of decision processes using a cognitive architecture
Mark A. Livingston, Arthi Murugesan, Derek Brock, et al.
Cognitive architectures are computational theories of reasoning the human mind engages in as it processes facts and experiences. A cognitive architecture uses declarative and procedural knowledge to represent mental constructs that are involved in decision making. Employing a model of behavioral and perceptual constraints derived from a set of one or more scenarios, the architecture reasons about the most likely consequence(s) of a sequence of events. Reasoning of any complexity and depth involving computational processes, however, is often opaque and challenging to comprehend. Arguably, for decision makers who may need to evaluate or question the results of autonomous reasoning, it would be useful to be able to inspect the steps involved in an interactive, graphical format. When a chain of evidence and constraint-based decision points can be visualized, it becomes easier to explore both how and why a scenario of interest will likely unfold in a particular way. In initial work on a scheme for visualizing cognitively-based decision processes, we focus on generating graphical representations of models run in the Polyscheme cognitive architecture. Our visualization algorithm operates on a modified version of Polyscheme's output, which is accomplished by augmenting models with a simple set of tags. We provide example visualizations and discuss properties of our technique that pose challenges for our representation goals. We conclude with a summary of feedback solicited from domain experts and practitioners in the field of cognitive modeling.
Vortex core timelines and ribbon summarizations: flow summarization over time and simulation ensembles
Alexis Y. L. Chan, Joohwi Lee, Russell M. Taylor II
We present two new vortex-summarization techniques designed to portray vortex motion over an entire simulation and over an ensemble of simulations in a single image. Linear “vortex core timelines” with cone glyphs summarize flow over all time steps of a single simulation, with color varying to indicate time. Simplified “ribbon summarizations” with hue nominally encoding ensemble membership and saturation encoding time enable direct visual comparison of the distribution of vortices in time and space for a set of simulations.
X3DBio2: A visual analysis tool for biomolecular structure comparison
Hong Yi, Sidharth Thakur, Latsavongsakda Sethaphong, et al.
A major problem in structural biology is the recognition of differences and similarities between related three-dimensional (3D) biomolecular structures. Investigating these structural relationships is important not only for understanding the functional properties of biologically significant molecules, but also for developing new and improved materials based on naturally occurring molecules. We developed a new visual analysis tool, X3DBio2, for 3D biomolecular structure comparison and analysis. The tool is designed for elucidating the structural effects of mutations in proteins and nucleic acids and for assessing time-dependent trajectories from molecular dynamics simulations. X3DBio2 is freely downloadable open-source software and provides tightly integrated features to perform many standard analysis and visual exploration tasks. We expect this tool can be applied to a variety of biological problems, and we illustrate its use on an example study of the differences and similarities between two proteins of glycosyltransferase family 2 that synthesize polysaccharide oligomers. The size and conformational distances between SpsA and K4CP, together with their retained core structural similarity, represent significant epochs in the evolution of inverting glycosyltransferases.
Improvement of web-based data acquisition and management system for GOSAT validation lidar data analysis
Hiroshi Okumura, Shoichiro Takubo, Takeru Kawasaki, et al.
A web-based data acquisition and management system for GOSAT (Greenhouse gases Observation SATellite) validation lidar data analysis has been developed. The system consists of a data acquisition sub-system (DAS) and a data management sub-system (DMS). DAS, written in Perl, acquires AMeDAS (Automated Meteorological Data Acquisition System) ground-level local meteorological data, GPS radiosonde upper-air meteorological data, ground-level oxidant data, skyradiometer data, skyview camera images, meteorological satellite IR image data, and GOSAT validation lidar data. DMS, written in PHP, displays satellite-pass dates and all acquired data. In this article, we briefly describe some improvements for higher performance and higher data usability. DAS now automatically calculates molecular number density profiles from the GPS radiosonde upper-air meteorological data and the U.S. standard atmosphere model. Predicted ozone density profile images above Saga city are also calculated using the Meteorological Research Institute (MRI) chemistry-climate model version 2 for comparison with actual ozone DIAL data.
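The molecular number density profile computation mentioned above follows directly from the ideal gas law, n = P / (kB T), given radiosonde pressure and temperature at each level. The following sketch uses made-up sample profile values purely to show the unit handling; it is not the authors' processing code.

```python
# Molecular number density from radiosonde pressure/temperature via the ideal gas law.
import numpy as np

KB = 1.380649e-23                                   # Boltzmann constant [J/K]

def number_density(pressure_hpa, temperature_c):
    """Molecular number density [molecules / m^3] from P [hPa] and T [deg C]."""
    p_pa = np.asarray(pressure_hpa) * 100.0         # hPa -> Pa
    t_k = np.asarray(temperature_c) + 273.15        # deg C -> K
    return p_pa / (KB * t_k)

# hypothetical radiosonde levels: altitude [km], pressure [hPa], temperature [deg C]
altitude = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
pressure = np.array([1013.0, 540.0, 265.0, 55.0, 12.0])
temperature = np.array([15.0, -17.0, -50.0, -56.0, -46.0])

for z, n in zip(altitude, number_density(pressure, temperature)):
    print(f"{z:5.1f} km : {n:.3e} molecules/m^3")
```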
Optimizing threshold for extreme scale analysis
Robert Maynard, Kenneth Moreland, Utkarsh Ayachit, et al.
As the HPC community starts focusing its efforts towards exascale, it becomes clear that we are looking at machines with billion-way concurrency. Although parallel computing has been at the core of the performance gains achieved until now, scaling over 1,000 times beyond current concurrency can be challenging. As discussed in this paper, even the smallest memory access and synchronization overheads can cause major bottlenecks at this scale. As we develop new software and adapt existing algorithms for exascale, we need to be cognizant of such pitfalls. In this paper, we document our experience with optimizing a fairly common and parallelizable visualization algorithm, thresholding of cells based on scalar values, for such highly concurrent architectures. Our experiments help us identify design patterns that can be generalized to other visualization algorithms as well. We discuss our implementation within the Dax toolkit, which is a framework for data analysis and visualization at extreme scale. The Dax toolkit employs the patterns discussed here within the framework's scaffolding to make it easier for algorithm developers to write algorithms without having to worry about such scaling issues.
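The thresholding operation discussed above is, at its core, a classify-then-compact data-parallel pattern. The following numpy sketch illustrates that two-pass structure (classification, exclusive prefix sum for output placement, scatter of surviving cell ids); it is only an illustration of the pattern, not Dax code, which is written in C++ for many-core devices.

```python
# Data-parallel threshold expressed as classify + prefix sum + scatter.
import numpy as np

def threshold_cells(cell_scalars, lo, hi):
    keep = (cell_scalars >= lo) & (cell_scalars <= hi)    # pass 1: classify each cell
    out_index = np.cumsum(keep) - keep                     # exclusive prefix sum of the flags
    n_out = int(keep.sum())
    out_cells = np.empty(n_out, dtype=np.intp)
    out_cells[out_index[keep]] = np.nonzero(keep)[0]       # pass 2: scatter surviving cell ids
    return out_cells

rng = np.random.default_rng(0)
scalars = rng.random(1_000_000)                            # one scalar value per cell
cells = threshold_cells(scalars, 0.25, 0.75)
print(f"{cells.size} of {scalars.size} cells pass the threshold")
```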
Perceptualization of geometry using intelligent haptic and visual sensing
Jianguang Weng, Hui Zhang
We present a set of paradigms for investigating geometric structures using haptic and visual sensing. Our principal test cases include smoothly embedded geometric shapes such as knotted curves embedded in 3D and knotted surfaces in 4D, which contain massive intersections when projected to one lower dimension. One can exploit a touch-responsive 3D interactive probe to haptically override this conflicting evidence in the rendered images, by forcing continuity in the haptic representation to emphasize the true topology. In our work, we exploited predictive haptic guidance, a "computer-simulated hand" with supplementary force suggestion, to support intelligent exploration of geometric shapes, smoothing the exploration and maximizing the probability of recognition. The cognitive load can be reduced further by enabling attention-driven visual sensing during the haptic exploration. Our methods combine to reveal the full richness of the haptic exploration of geometric structures, and to overcome the limitations of traditional 4D visualization.
Review of chart recognition in document images
Yan Liu, Xiaoqing Lu, Yeyang Qin, et al.
As an effective way of transmitting information, charts are widely used to represent scientific and statistical data in books, research papers, newspapers, etc. Though textual information is still the major source of data, there has been an increasing trend of introducing graphs, pictures, and figures into the information pool. Text recognition for documents has been accomplished using optical character recognition (OCR) software. Chart recognition techniques, a necessary supplement to OCR for document images, remain an unsolved problem due to the great subjectivity and variety of chart styles. This paper reviews the development of chart recognition techniques over the past decades and presents the focuses of current research. The whole process of chart recognition is presented systematically, and mainly includes three parts: chart segmentation, chart classification, and chart interpretation. In each part, the latest research work is introduced. Finally, the paper concludes with a summary and promising future research directions.