Proceedings Volume 3298

Visual Data Exploration and Analysis V


View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 14 May 1998
Contents: 10 Sessions, 31 Papers, 0 Presentations
Conference: Photonics West '98 Electronic Imaging 1998
Volume Number: 3298

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Web-based Visualization
  • Virtual Reality and Sound
  • Multidimensional Visualization
  • Uncertainty Visualization
  • Time-based Visualization
  • Feature Extraction
  • Information Visualization and Artificial Intelligence
  • Case Studies
  • Rendering I
  • Rendering II
Web-based Visualization
Personal computer-based data visualization system for use in the World Wide Web environment
Philip C. Chen
A current personal computer has a high clock speed and large cache and disc memory, and it can run graphics and multimedia software systems that were originally designed exclusively for graphics workstations. The goal of the current research is to investigate a personal computer-based visualization system that has its own data visualization software, processing units, and data storage units. The architecture of this personal computer-based visualization system is similar to that of a visualization system using a graphics workstation. Since the personal computer used in the current visualization system is a laptop computer, it is highly portable, and with its network capability it can be used for some real-time simulation and interactive applications wherever Internet access is available. The data visualization software system is AVS/Express, which runs on both personal computers and workstations. In this paper, design considerations for a personal computer-based visualization system will be examined. The performance of this system will be compared with that of a workstation-based visualization system. Integration of visualization and web techniques with tools such as VRML and Java for interactive and cooperative visualization practices will be explored. A visualization case study with data generated by a numerical simulation model will be presented. A live visualization session using a laptop personal computer will be demonstrated.
Simulation steering and interactive visualization over the World Wide Web
Jerome Burgaud, Stephane Dutilleul, Michel Grave
When users have to access remote services such as heavy simulations or experimental facilities, they need highly interactive tools for results visualization or remote equipment steering. The complexity of these tools and their strong application dependency require them to be easily configurable so that they can be well adapted to specific needs. Application builders such as AVS, IRIS Explorer, and VTK are therefore well suited to fulfil this configurability requirement. However, using such programs requires specific computing configurations, in terms of operating systems and access rights for example, limiting the number of potential users. This paper presents how Web technologies can help solve some of these issues in remote simulation steering and visualization, while keeping a high level of configurability and graphical interaction for the client.
Java, CORBA, and patterns in a distributed scientific visualization system
John Christopher Lakey, Samuel L. Espy, David Gould
Software engineering is currently undergoing a radical paradigm shift away from monolithic stovepipe applications strongly tied to a particular platform. Key enabling technologies, such as Java and the Common Object Request Broker Architecture (CORBA), allow construction of newer systems from distributed objects and components, providing services seamlessly integrated across multiple platforms. Another exciting trend in the software engineering discipline is the use of patterns. Simply put, a pattern is a rule that relates a recurring problem and a software configuration that resolves that problem in a given context. The use of design patterns, Java, and CORBA offers distinct advantages to visualization tool developers, particularly in light of the extreme demands visualization tools place on existing computing platforms. Potential benefits include: tools capable of using distributed computing resources and data repositories; the ability to add new functionality and GUIs at runtime; and the ability to develop cross-platform tools without rewriting large functional units and user interfaces. In this paper, we describe our use of design patterns for the development of distributed, cross-platform visualization systems. The visualization systems currently under development are built with Java and C++ components connected via CORBA middleware.
Virtual Reality and Sound
Internet-oriented visualization with audio presentation of speech signals
Jerome J. Braun, Haim Levkowitz
Visualization of speech signals, including the capability to visualize the waveforms while simultaneously hearing the speech, is among the essential requirements in speech processing research. In tasks related to the labeling of speech signals, visualization activities may have to be performed by multiple users upon a centralized collection of speech data. When speech labeling activities involve perceptual issues, human factors issues, including functionality tradeoffs, are particularly important, since the user's burden (tiredness, annoyance) can affect the perceptual responses. We developed VideVox (pronounced 'Veedeh-Vox'), a speech visualization facility in which the visualization activities may be performed by a large number of users in geographically, dialectally and linguistically diverse locations. Developed in Java, and capable of operating both as an Internet Java applet and as a Java application, VideVox is platform independent. Using the client-server architecture paradigm, it allows distributed visualization work. The Internet orientation makes VideVox a promising direction for speech signal visualization in speech labeling activities that require a large number of users in multiple locations. In the paper, we describe our approach, VideVox features, modes of audio data exploration and audio-synchronous animation for speech visualization, operations related to the identification of perceptual events, and the human factors issues related to perception-oriented visualizations of speech.
Analytical augmentation of 3D simulation environments
Julia J. Loughran, Marchelle M. Stahl
This paper describes an approach for augmenting three-dimensional (3D) virtual environments (VEs) with analytic information and multimedia annotations to enhance training and education applications. Analytic or symbolic information in VEs is presented as bar charts, text, graphical overlays, or with the use of color. Analytic results can be computed and displayed in the VE at run-time or, more likely, while replaying a simulation. These annotations would typically include computations of pre-defined Measures of Performance (MOPs) or Measures of Effectiveness (MOEs) associated with the training or educational goals of the simulation. Multimedia annotations are inserted into the VE by the user and may include a drawing or whiteboarding capability, enabling participants to insert written text and/or graphics into the two-dimensional (2D) or 3D world; audio comments; and/or video recordings. These annotations can clarify a point, capture teacher feedback, or elaborate on the student's perspective or understanding of the experience. The annotations are captured in the VE either synchronously or asynchronously from the users (students and instructors), during simulation execution or afterward during a replay. When replaying or reviewing the simulation, the embedded annotations can be reviewed by a single user or by multiple users through the use of collaboration technologies. By augmenting 3D virtual environments with analytic and multimedia annotations, the education and training experience may be enhanced. The annotations can offer more effective feedback, enhance understanding, and increase participation. They may also support distance learning by promoting student/teacher interaction without co-location.
Interactions with sound parameters
Krishnan Seetharaman, Georges G. Grinstein, Stuart Smith, et al.
Traditionally, visualization systems have focused on the visual sense. However, with the advent of multimedia and virtual reality systems, other senses such as hearing and touch are slowly being incorporated into systems. Even in the visual channel, the majority of systems depend on the perception of geometry through graphical concepts such as lines, fill areas, windows, and raster pixmaps to provide the visualization feedback. Sound is being used effectively in visualization systems and is increasingly being integrated into mainstream systems. However, we have not made much progress in developing a fundamental understanding of interaction in non-geometric representational spaces. We are interested in extending simple interactions, such as zoom and pan, into other domains such as sound. Pitch is a perceptual quantity of sound that is associated with the physical quantity frequency. We describe how zoom and pan operations in pitch are supported. Formal definitions for these operations are also provided. Finally, we describe a prototype system for such interactions.
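As an illustration only (not taken from the paper, which gives its own formal definitions), zoom and pan in pitch can be sketched on a log-frequency axis: pan becomes a translation, i.e. a fixed multiplicative shift of every frequency, and zoom a scaling of intervals about a center frequency. The function names and semitone/center parameters are assumptions:

```python
import math

def pan_pitch(freqs, semitones):
    """Pan in pitch: shift every frequency by a fixed musical interval.

    On a log-frequency axis this is a translation, implemented by
    multiplying each frequency by 2**(semitones/12).
    """
    factor = 2.0 ** (semitones / 12.0)
    return [f * factor for f in freqs]

def zoom_pitch(freqs, center, factor):
    """Zoom in pitch: expand (factor > 1) or compress (factor < 1) the
    interval between each frequency and a center frequency, measured
    in log-frequency space."""
    return [center * (f / center) ** factor for f in freqs]
```

Panning by +12 semitones doubles every frequency; a zoom factor below 1 compresses the pitch range toward the center, analogous to zooming out of a spatial view.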
Signal modulation approach to data sonification
Matti Groehn, Juha Backman, Aki Haermae
The human ear has a remarkable ability to detect temporal structures and patterns even in highly complex signals. It is therefore profitable to use the ear in data analysis, especially in cases where the amount of data is too vast for any practical visualization technique. The main principle in this paper is to incorporate psychoacoustic knowledge into the system. For example, auditory frequency resolution, i.e., the ERB scale, is used in mapping data values to control parameters of the system. A real-time computer program has been developed with a graphical user interface through which various parameters of the system can be controlled. For example, the time scale is continuously adjustable, and there are methods to focus on particular aspects or fractions of the data. The results are promising and show that this approach has potential in sonification applications. The limitation of the modulation approach is the number of simultaneously detectable parameters.
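For readers unfamiliar with the ERB scale, a minimal sketch of ERB-based frequency mapping follows. It uses the standard Glasberg-Moore ERB-rate formula; the frequency range and function names are illustrative assumptions, not taken from the paper:

```python
import math

def erb_rate(f_hz):
    """Glasberg & Moore ERB-rate (in Cams) for a frequency in Hz."""
    return 21.4 * math.log10(1.0 + 4.37 * f_hz / 1000.0)

def erb_to_hz(e):
    """Inverse of erb_rate: convert an ERB-rate value back to Hz."""
    return (10.0 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

def data_to_freq(x, lo_hz=100.0, hi_hz=5000.0):
    """Map a normalized data value x in [0, 1] to a frequency, linearly
    on the ERB-rate axis, so equal data steps correspond to roughly
    equal perceptual frequency steps (range endpoints are assumed)."""
    e_lo, e_hi = erb_rate(lo_hz), erb_rate(hi_hz)
    return erb_to_hz(e_lo + x * (e_hi - e_lo))
```

Mapping on the ERB-rate axis rather than linearly in Hz devotes more of the data range to low frequencies, matching the ear's resolution.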
Multidimensional Visualization
Visualization: a really generic approach or the art of mapping data to graphical objects
Joern Trilk, Frank Schuetz
Visualization is an important technology for analyzing large amounts of data. However, the process of creating meaningful visualizations is quite difficult. The success of this process depends heavily on a good mapping of objects present in the application domain to objects used in the graphical representation. Both kinds of objects possess several attributes. Whereas data objects have attributes of certain types (e.g. integers, strings), graphical objects are characterized by their appearance (shape, color, size, etc.). In our approach, the user may map data attributes arbitrarily to graphical attributes, leading to great flexibility. In our opinion, this is the only way to achieve a really generic approach. To evaluate our ideas, we developed a tool called ProViS. This tool indicates the possible attributes of data objects as well as graphical objects. Depending on his goals, the user can then freely 'connect' attributes of data objects to attributes of their graphical counterparts. The structure behind the application objects can be worked out very easily with the help of various layout algorithms. In addition, we integrated several mechanisms (e.g. ghosting, hiding, grouping, fisheye views) to reduce complexity and to further enhance the three-dimensional visualization. In this paper, we first take a look at the basic principle of visualization: mapping data. Then we present ProViS, a visualization tool implementing our idea of mapping.
Visiview: a system for the visualization of multidimensional data
Shaun Bangay
Results generated by simulation of computer systems are often presented as a multi-dimensional data set, where the number of dimensions may be greater than 4 if sufficient system parameters are modelled. This paper describes a visualization system intended to assist in understanding the relationship between, and effect upon system behavior of, the different values of the system parameters. The system is applied to data that cannot be represented using a mesh or isosurface representation, and in general can only be represented as a cloud of points. The use of stereoscopic rendering and rapid interaction with the data are compared with regard to their value in providing insight into the nature of the data. A number of techniques are implemented for displaying projections of the data set with up to 7 dimensions, and for allowing intuitive manipulation of the remaining dimensions. In this way the effect of changes in one variable in the presence of a number of others can be explored. The use of these techniques, when applied to data from computer system simulation, results in an intuitive understanding of the effects of the system parameters on system behavior.
Uncertainty Visualization
exVis: a visual analysis tool for wind tunnel data
D. Glenn Deardorff, Leslie E. Keeley, Samuel P. Uselton
exVis is a software tool created to support interactive display and analysis of data collected during wind tunnel experiments. It is a result of a continuing project to explore the uses of information technology in improving the effectiveness of aeronautical design professionals. The data analysis goals are accomplished by allowing aerodynamicists to display and query data collected by new data acquisition systems and to create traditional wind tunnel plots from this data by interactively interrogating these images. exVis was built as a collection of distinct modules to allow for rapid prototyping, to foster evolution of capabilities, and to facilitate object reuse within other applications being developed. It was implemented using C++ and Open Inventor, commercially available object-oriented tools. The initial version was composed of three main classes. Two of these modules are autonomous viewer objects intended to display the test images (ImageViewer) and the plots (GraphViewer). The third main class is the Application User Interface (AUI) which manages the passing of data and events between the viewers, as well as providing a user interface to certain features. User feedback was obtained on a regular basis, which allowed for quick revision cycles and appropriately enhanced feature sets. During the development process additional classes were added, including a color map editor and a data set manager. The ImageViewer module was substantially rewritten to add features and to use the data set manager. The use of an object-oriented design was successful in allowing rapid prototyping and easy feature addition.
Viewing angle uncertainty in volumetric visualization
Abigail Joseph, Suresh Kumar Lodha, Tao Starbow
Many direct volume rendering algorithms are routinely used to render volumetric data in scientific applications. Different algorithms, however, produce different results and may lead to different interpretations of the scientific data. Many factors contribute to these differences, including the choice of rendering algorithm, such as a ray-tracing or projection method. Within a given algorithm such as ray tracing, there are many factors, such as the number of samples, desired opacity, and sample locations, that lead to different images. In some of these cases, the differences between the images are significant enough to demand further investigation. In this work we investigate the sensitivity of the differences between images to the viewing angle. In other words, we employ different visualization methods and obtain different images for the same viewing angle. The dependence of these differences on the viewing angle is then investigated. These difference images are visualized by pasting them on the six sides of a cube, corresponding to six different viewing angles. The differences are also visualized by using glyphs on a sphere, where each point on the sphere corresponds to a viewing angle. For most viewing angles, the differences are not significant; in such cases, inexpensive visualization algorithms can be employed. In some cases, where the differences are large, our technique compels the user to incorporate uncertainty while drawing conclusions from those images. We also discuss extensions of this work that incorporate uncertainty in volumetric visualization corresponding to different choices of color mapping or opacity.
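The core measurement can be sketched as follows (an illustrative stand-in, not the authors' code): render the same data with two methods at each viewing angle, reduce each pair of images to a single RMS difference value, and use that value, for example, to size a glyph at the corresponding point on a sphere of viewpoints. The renderers here are caller-supplied placeholders:

```python
import math

def rms_difference(img_a, img_b):
    """Root-mean-square pixel difference between two equal-sized
    grayscale images, each given as a list of rows."""
    n, total = 0, 0.0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            total += (a - b) ** 2
            n += 1
    return math.sqrt(total / n)

def difference_per_view(render_a, render_b, view_angles):
    """For each viewing angle, render with both methods and record the
    RMS image difference; the result can drive glyph size on a sphere
    of viewpoints (render_a/render_b are hypothetical callables)."""
    return {v: rms_difference(render_a(v), render_b(v)) for v in view_angles}
```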
Waltz: an exploratory visualization tool for volume data using multiform abstract displays
Although visualization is now widely used, misinterpretations still occur. There are three primary solutions intended to help a user interpret data correctly: displaying the data in different forms (multiform visualization); simplifying (or abstracting) the structure of the viewed information; and linking objects and views together (allowing corresponding objects to be jointly manipulated and interrogated). These well-known visualization techniques place an emphasis on the visualization display. We believe, however, that current visualization systems do not effectively utilise the display, for example, often placing it at the end of a long visualization process. Our visualization system, based on an adapted visualization model, allows a display method to be used throughout the visualization process, in which the user operates a 'Display (correlate) and Refine' visualization cycle. This display integration provides a useful exploration environment, where objects and views may be directly manipulated and a set of 'portions of interest' can be selected to generate a specialized dataset, which may subsequently be further displayed, manipulated and filtered.
Evaluating the quality of scientific visualizations: the Q-VIS reference model
Helmut Haase
Scientific visualizations are important to scientists and engineers in many fields, but also to managers and to the general public. In order to achieve good results, there must be means to evaluate the quality of visualizations and to compare visualizations with each other. In this paper, after a short introduction and an overview of some related work, the notion of a 'visualization background' is introduced. It includes the prior knowledge of the user; the aims of the user; the application domain; the amount, structure, and distribution of the data; and the available hardware and software. Next, the problem of quantifying visualization quality is discussed. Then, six subqualities are presented, namely data resolution quality, semantic quality, mapping quality, image quality, presentation and interaction quality, and user quality. The reference model defines visualization quality as six pairs of two values each: for each of the six subqualities, a weight value C (representing the importance of the subquality for the visualization background) and a subquality value Q (a measure of how well the visualization meets the requirements of the visualization background in this subquality) are given. Finally, the Q-VIS graph is introduced, which offers a compact, easy-to-perceive representation of this visualization quality. Thus, a tool for evaluating and comparing visualizations and visualization systems is presented which can help to achieve better visualizations in the future.
Time-based Visualization
Multiview Reductive Decomposition
Justin D. Pearlman, Zimri Yaseen
Introduction: Data analysis for diagnostic purposes is considered from the standpoint of applying visualization to medical problem-solving systems. We focus on the efficacy and efficiency of decision-making based on visualization of reductive decompositions of time-series image data. Methods: A multiview reductive decomposition presentation is a collection of reduced-cardinality subsets of transformed data, constructed so that optimal decisions based on the decomposition presentations, or on a reconstructed approximation of the original data built from them, are equivalent to decisions based directly on the original data. Results: Three classes of decomposition are evaluated: interactive dynamic, fixed dynamic, and static. Dynamic presentations change with time. Fixed dynamic presentations are suitable for videotape, while interactive ones require a computer. Methods for the design and evaluation of novel presentations, and equations for the analysis of error propagation, are presented. Conclusions: Computed decomposition and presentation of time-series data offers substantive reductions in the expertise and time required to understand complex data sets. Visualization is intrinsic to the method, and is also useful for comparing different decompositions. The interdependence of non-orthogonal decompositions provides context and improves confidence tracking. Performance is enhanced by tailoring data visualization to the requirements of the problem-solving system.
Visualizing artifacts, meta-information, and quality parameters of image sequences
Peter Uray, Heimo Mueller-Seelich, Walter Plaschzug, et al.
This paper presents visualization methods for film quality parameters which are used in the course of semi-automatic film restoration. A central part is navigation in the time context by visualizing the temporal film structure. So-called 'time sections' take characteristic features (e.g. one pixel line or one column, motion information) from each image and map them to a column of a time-section image. Typical dimensions of a time-section image for a 100-minute movie are 500 by 150,000 pixels, where each image of the original sequence is represented by one column (500 by 1) of the time-section image. As the width of such an image is too large for displaying it in one piece on a computer monitor, a non-linear time scale is introduced. This allows the content of an interesting shot to be displayed in full detail while other shots are shown in a compressed view. The time line of a time section can be regarded as an array of 'temporal hyperlinks' modeling the temporal structure of a movie. The smallest temporal entity of annotation is given by shots (continuous sequences of images), which can be combined hierarchically into scenes, acts, etc. or grouped by certain characteristics (e.g. artefact class). In addition, special quality parameters can be assigned to temporal entities such as shots, scenes and groups. These parameters can be visualized by icons that indicate quality on the non-linear timeline. Application examples for quality icons of each defect class are given, and the visual quality representation used for the restoration of a full-length movie is presented.
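The time-section construction, as described, is simple to sketch (illustrative code, not the authors' implementation; frames are assumed to be row-major grayscale arrays, and the default choice of the center column is an assumption):

```python
def time_section(frames, column=None):
    """Build a time-section image from a sequence of frames.

    Each frame is a list of rows (height x width). One pixel column is
    taken from each frame and becomes one column of the output, so a
    sequence of T frames of height H yields an H x T image, matching
    the paper's one-column-per-image layout.
    """
    height = len(frames[0])
    if column is None:
        column = len(frames[0][0]) // 2  # default: center column
    return [[frame[y][column] for frame in frames] for y in range(height)]
```

A 100-minute movie at 25 frames per second yields 150,000 frames, hence the 500 x 150,000 time-section image mentioned above.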
Multiplexed space-time maps for time series data visualization: application to 4D cardiac imaging
Justin D. Pearlman, Zimri Yaseen
Introduction: 4D data (time-series images) inundate the viewer, so conventional 2D slice image review for decision making verges on impracticality. A multiplexed composite presentation was developed to facilitate review of the 4D cardiac data for diagnostics. Methods: Data were collected by MRI as time-series: 4D data (beating volume), and bolus transit contrast studies. Space-time maps were produced by extracting heart muscle in short axis views, remapping it from polar to Cartesian, so that the annular muscle formed a vertical strip, tiled into a 2D image in which vertical distance represents distance around the myocardium, and horizontal distance represents time. Results: Space-time maps enabled instant recognition and rapid measurement of size and timing of abnormalities, validated by microsphere distributions (r equals 0.86), ex vivo CT imaging (r equals 0.95), and correspondence to treatment effect (p less than 0.01). Multiplexed space-time maps enabled rapid recognition and rapid measurement of both the spatial extent and timing of transient changes. These maps summarize a massive amount of time-varying data in static images. Conclusion: Multiplexing obviates the need for accurate edge detection, and facilitates observer evaluation of data confidence and error propagation. Diagnostic information extraction and quantification is accelerated markedly with improved reproducibility and accuracy. The method is very noise tolerant.
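One way to sketch the polar-to-Cartesian unwrapping of the annular muscle is a nearest-neighbor resampling of an annulus into a rectangular strip (illustrative only; the center, radii, sampling density, and the choice of which axis holds angle versus radius are hypothetical parameters, not the authors' code):

```python
import math

def unwrap_annulus(image, cx, cy, r_inner, r_outer, n_theta=64, n_r=8):
    """Resample an annular region of a grayscale image (list of rows)
    into a rectangular strip: rows index radius (inner to outer),
    columns index angle around the center (nearest-neighbor sampling)."""
    strip = []
    for j in range(n_r):
        r = r_inner + (r_outer - r_inner) * j / max(n_r - 1, 1)
        row = []
        for i in range(n_theta):
            theta = 2.0 * math.pi * i / n_theta
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            row.append(image[y][x])
        strip.append(row)
    return strip
```

Stacking one such strip per time point then produces a 2D space-time map in which one axis is position around the myocardium and the other is time.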
Feature Extraction
Visualization of cluster hierarchies
Bjoern Heckel, Bernd Hamann
Clustering is a powerful analysis technique used to detect structures in data sets. The output of a clustering process can be very large. However, if presented in a textual form, the amount of information that can be understood is limited. An alternative approach is to display the data in a graphical way. An advantage of visualization is that a larger amount of information can be perceived. Supporting user interaction and manipulation of the object space enables exploration of large data sets. We present a technique for the visualization of cluster hierarchies. The input to our technique is a finite set of n-dimensional points. All points are initially placed in one cluster, which is recursively split, creating a hierarchy of clusters. Principal component analysis is used to determine how to optimally bisect a cluster. After splitting a cluster, a local reclassification scheme based on the Gabriel graph is applied to improve the quality of the classification. As a byproduct of the generation of the cluster hierarchy, we compute and store the eigendirections, eigenvalues and local centers for each cluster at each level of the hierarchy. For the visualization of the cluster hierarchy, the user has to specify the three dimensions that are used for the rendering process. The local coordinate systems (centroids, eigenvalues, and eigendirections) of each cluster induce a local metric that can be utilized to define 'density functions.' These functions describe hyperellipsoids that we render in two different ways: (1) we generate a set of (transparent) contour surfaces (each cluster would appear as a translucent surface) or (2) we apply 'ray casting' to simulate the behavior of X-rays penetrating the density fields implied by the clusters. 
In order to show the different levels in the hierarchy, one can either render only those clusters belonging to the same level or, alternatively, use transparency to be able to see through the 'outer shell' of a cluster and see the finer, more detailed structures inside.
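The recursive PCA bisection can be sketched in pure Python (an illustrative reduction: power iteration stands in for a full eigendecomposition, the Gabriel-graph reclassification step is omitted, and all names and stopping parameters are assumptions):

```python
def principal_direction(points, iters=100):
    """Dominant eigenvector of the covariance matrix via power iteration."""
    d, n = len(points[0]), len(points)
    mean = [sum(p[k] for p in points) / n for k in range(d)]
    centered = [[p[k] - mean[k] for k in range(d)] for p in points]
    v = [1.0] * d
    for _ in range(iters):
        # Apply the covariance matrix to v without forming it explicitly.
        w = [0.0] * d
        for x in centered:
            dot = sum(x[k] * v[k] for k in range(d))
            for k in range(d):
                w[k] += dot * x[k]
        norm = sum(c * c for c in w) ** 0.5 or 1.0
        v = [c / norm for c in w]
    return mean, v

def bisect_cluster(points):
    """Split a cluster by the hyperplane through its centroid that is
    normal to its principal component."""
    mean, v = principal_direction(points)
    left, right = [], []
    for p in points:
        s = sum((p[k] - mean[k]) * v[k] for k in range(len(p)))
        (left if s < 0.0 else right).append(p)
    return left, right

def cluster_hierarchy(points, min_size=2, depth=3):
    """Recursively bisect to build a cluster hierarchy (nested lists)."""
    if depth == 0 or len(points) < 2 * min_size:
        return points
    left, right = bisect_cluster(points)
    if not left or not right:
        return points
    return [cluster_hierarchy(left, min_size, depth - 1),
            cluster_hierarchy(right, min_size, depth - 1)]
```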
Match score image: a visualization tool for image query refinement
Lawrence D. Bergman, Vittorio Castelli
We present a visualization technique designed to facilitate iterative refinement of content-based image queries, particularly example-based specification. The technique operates on scores produced by region-based matching algorithms, including texture matching and template matching. By mapping match scores to color, then compositing with the original image, we provide the user with the 'goodness' of match for each region and simultaneously with the original image information. There are several ways in which the match score image can be used to enhance the query refinement process including: facilitating the selection of both positive and negative examples, guiding the selection of thresholds, and enabling exploration of the effect of other parameter values on match algorithm performance. The usability of this visualization technique is highly dependent on choice of score-to-color mapping parameters including continuous vs. discrete, hue range, saturation, lightness, and transparency. We provide some heuristics for selecting these values. Although usable for photographic images, the match score image is particularly useful in application domains such as remote sensing and medical imaging, where particular subregions of large images are sought, rather than entire images.
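The score-to-color compositing step can be sketched as follows (illustrative only: a fixed red-to-green mapping and a per-pixel alpha blend stand in for the configurable continuous/discrete mapping parameters the paper discusses):

```python
def score_to_color(score):
    """Map a match score in [0, 1] to an RGB triple, low = red,
    high = green (a simple stand-in for a configurable mapping)."""
    s = min(max(score, 0.0), 1.0)
    return (int(255 * (1.0 - s)), int(255 * s), 0)

def match_score_image(gray, scores, alpha=0.5):
    """Composite per-pixel match scores over a grayscale image:
    out = alpha * score_color + (1 - alpha) * original, so both the
    'goodness' of match and the image content remain visible."""
    out = []
    for row_g, row_s in zip(gray, scores):
        row = []
        for g, s in zip(row_g, row_s):
            r, gn, b = score_to_color(s)
            row.append((int(alpha * r + (1 - alpha) * g),
                        int(alpha * gn + (1 - alpha) * g),
                        int(alpha * b + (1 - alpha) * g)))
        out.append(row)
    return out
```

Lowering alpha favors the original image; raising it emphasizes the score field, one of the tradeoffs the abstract notes for usability.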
Three-dimensional active net for volume extraction
Ikuko Takanashi, Shigeru Muraki, Akio Doi, et al.
3D Active Net, which is a 3D extension of Snakes, is an energy-minimizing surface model which can extract a volume of interest from 3D volume data. It is deformable and evolves in 3D space to be attracted to salient features, according to its internal and image energy. The net can be fitted to the contour of a target object by defining the image energy suitable for the contour property. We present testing results of the extraction of a muscle from the Visible Human Data by two methods: manual segmentation and the application of 3D Active Net. We apply principal component analysis, which utilizes the color information of the 3D volume data to emphasize an ill-defined contour of the muscle, and then apply 3D Active Net. We recognize that the extracted object has a smooth and natural contour in contrast with a comparable manual segmentation, proving an advantage of our approach.
Information Visualization and Artificial Intelligence
What good is visualization: three experiments
Robert R. Korfhage, David S. Dubin, Edward M. Housman
Three experiments demonstrate capabilities of the VIBE information retrieval interface. The first explores the identification of reference point sets that spread a collection of displayed documents into small clusters. The second demonstrates using a visual representation of a document set to determine the structure of a document collection vis-a-vis a given reference point set when all semantic information has been hidden from the user. In the third experiment, VIBE, in conjunction with genetic algorithm techniques, refined the definition of a POI (reference or query point), improving precision and recall.
Data visualization using automatic perceptually motivated shapes
Christopher D. Shaw, David S. Ebert, James M. Kukla, et al.
This paper describes a new technique for the multi-dimensional visualization of data through automatic procedural generation of glyph shapes based on mathematical functions. Our glyph- based Stereoscopic Field Analyzer (SFA) system allows the visualization of both regular and irregular grids of volumetric data. SFA uses a glyph's location, 3D size, color and opacity to encode up to 8 attributes of scalar data per glyph. We have extended SFA's capabilities to explore shape variation as a visualization attribute. We opted for a procedural approach, which allows flexibility, data abstraction, and freedom from specification of detailed shapes. Superquadrics are a natural choice to satisfy our goal of automatic and comprehensible mapping of data to shape. For our initial implementation we have chosen superellipses. We parameterize superquadrics to allow continuous control over the 'roundness' or 'pointiness' of the shape in the two major planes which intersect to form the shape, allowing a very simple, intuitive, abstract schema of shape specification.
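A superellipse with a single roundness exponent, of the kind used for such glyph shapes, can be sketched as follows (illustrative code; the parameterization is the standard superellipse form and the names are assumptions, not SFA's API):

```python
import math

def superellipse_point(t, a=1.0, b=1.0, n=2.0):
    """Point on the superellipse |x/a|**n + |y/b|**n = 1 at parameter t.

    n = 2 gives an ellipse, large n approaches a rectangle, and n = 1 a
    diamond: a single exponent gives continuous control over
    'roundness' versus 'pointiness', as with superquadric glyphs.
    """
    c, s = math.cos(t), math.sin(t)
    x = a * math.copysign(abs(c) ** (2.0 / n), c)
    y = b * math.copysign(abs(s) ** (2.0 / n), s)
    return x, y

def superellipse_outline(a=1.0, b=1.0, n=2.0, samples=64):
    """Sampled outline of a superellipse glyph."""
    return [superellipse_point(2.0 * math.pi * i / samples, a, b, n)
            for i in range(samples)]
```

Mapping a data attribute to the exponent n then yields a continuous, perceptually ordered family of glyph shapes.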
Conception and realization of a living map
Gilles Taladoire, Didier Lille
Our objective is to obtain a 'living map' in which the traditional elements of a map are not static but can evolve in real time and interact. We therefore developed a system with a graphic interface allowing simulations to be constructed and both natural and man-induced phenomena to be monitored in real time. For this system, we coupled an expert system with an image processing system specialized in remote sensing. These two components accept many kinds of data (aerial photographs, satellite pictures, maps, digital elevation models, . . .) and interact in real time. The system is built in four levels and uses an object methodology. It operates according to the semantic richness of the objects it handles. The hierarchy of handled graphic objects essentially includes geo-referenced objects and 'map' objects. Our prototype uses Gensym's object-oriented expert system 'G2' and the graphic visualization system 'Visiter' developed by Latical. An example will be presented.
Case Studies
icon_mobile_dropdown
High-resolution atheroma mixture-modeled MRI
Justin D. Pearlman, Vivek V. Sukhatme
Introduction: In chemically sensitive MRI, similar signals from the atheroma that blocks arteries and from perivascular fat can result, due to partial volume effects, in a mixed signal. Methods: MRI of pure samples and of blood vessels containing 10 different ratios of atheroma and perivascular fat were acquired by inversion recovery MRI. Mixture modeling deduced component signals, and distribution analysis enabled conversion of dynamic range to enhanced resolution. Imaging was repeated at high resolution (long acquisition) to validate the resolution enhancement. Monte Carlo methods were applied to examine error propagation. Quantitation of atheroma content corresponds accurately to measured samples (r = 0.997). Results: Mixture modeling correctly identified components with rms error less than 5%. Multiresolution imaging confirmed that redistribution converts dynamic range to enhanced spatial resolution. The resolution enhancement agrees well with direct high-resolution acquisition, achieving a 256x improvement in effective in-plane resolution. Conclusion: Mixture modeling correctly identifies atheroma vs. perivascular fat signal components, and resolution enhancement converts the results to high spatial resolution maps of the components. The resolution-enhanced images agree well with the true high-resolution acquisition. They are faster to acquire, more practical, and provide better tissue characterization, including quantitation of the atheroma lipid burden in the vessel.
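The partial-volume premise above, that a mixed voxel's signal is a linear combination of the two pure component signals, admits a closed-form two-component unmixing step. A hedged sketch follows; the function name and the sum-to-one constraint are illustrative assumptions, and the paper's full mixture model and Monte Carlo error analysis are not reproduced:

```python
def unmix_two_components(mixed, pure_a, pure_b):
    """Estimate the fraction f of component A in a mixed signal.

    Assumes the mixed voxel signal at each inversion time t is the linear
    combination f * pure_a[t] + (1 - f) * pure_b[t] (partial-volume model),
    and solves the one-parameter least-squares problem in closed form.
    """
    num = den = 0.0
    for m, a, b in zip(mixed, pure_a, pure_b):
        d = a - b
        num += (m - b) * d
        den += d * d
    f = num / den
    # Clamp to physically meaningful fractions.
    return max(0.0, min(1.0, f))
```

With signals sampled at several inversion times, the fit is overdetermined, which is what makes the component fractions robust to noise.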
Fly wing asymmetry: a case study in visualization
John W. Buchanan, Paul A. Ferry, Grant McIntyre, et al.
There are many visualization systems available to the scientific community. Unfortunately, the use of such systems is not as widespread as we would like. The visualization of a scientist's data requires expertise from both the scientist and the visualization expert. In this paper we document the interaction between a scientist and a team of graphics specialists. We discuss why standard visualization systems were not used, and we present our prototype system for fly-wing asymmetry visualization. In biology, organismal symmetry, or the lack thereof, is used as a measure of the quality of life forms. The data used in this visualization was collected to test the hypothesis that old mothers produce lower quality offspring than young mothers. Thirteen landmarks at wing vein intersections were digitized three times on each wing and analyzed for asymmetry. The system we present here complements the statistical analysis tools used for the formal analysis. In particular, our system has helped the scientist find outliers and gain an intuition for the data that has helped him decide which statistical analyses to perform.
Visualization of geographically referenced observations to support real-time decision making
Patrick E. Mantey
Graphical presentation of data to support decision making has a long history, going back to the earliest use of maps and charts. Continued advances in technology have enabled a succession of powerful tools for decision support, providing visualization of geographically referenced observations. Early geographic information systems (GIS) were succeeded by increasingly powerful, and affordable, systems. Military command and control systems and air traffic control systems demonstrated the value of visualizing real-time data in time-critical decision-making. Decision support systems today combine the functions of a GIS with aspects of command and control systems in a powerful and affordable context for real-time decision-making. The REINAS system is presented as a prototype of such a real-time decision-support system, exploiting real-time geographically referenced measurements.
Rendering I
icon_mobile_dropdown
Irregular grid volume rendering with composition networks
Volumetric irregular grids are the next frontier to conquer in interactive 3D graphics. Visualization algorithms for rectilinear 256³ data volumes have been optimized to achieve one to 15 frames/second, depending on the workstation. With equivalent computational resources, irregular grids with millions of cells may take minutes to render for a new viewpoint. The state of the art for graphics rendering, PixelFlow, provides screen- and object-space parallelism for polygonal rendering. Unfortunately, volume rendering of irregular data is at odds with the sort-last architecture. I investigate parallel algorithms for direct volume rendering on PixelFlow that generalize to other compositing architectures. Experiments are performed on the NASA Langley fighter dataset, using the projected tetrahedra approach of Shirley and Tuchman. Tetrahedral sorting is done by the circumscribing-sphere approach of Cignoni et al. Key approaches include sort-first on sort-last, world-space subdivision by clipping, rearrangeable linear compositing for any view angle, and static load balancing. The new world-space subdivision by clipping provides for efficient and correct rendering of unstructured data by using object-space clipping planes. Research results include performance estimates on PixelFlow for irregular grid volume rendering: PixelFlow is estimated to achieve 30 frames/second on irregular grids of 300,000 tetrahedra, or 10 million tetrahedra per second.
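The compositing networks discussed here ultimately evaluate the standard 'over' operator on depth-sorted fragments, as in the Shirley-Tuchman projected-tetrahedra pipeline. A minimal sketch of back-to-front compositing for one pixel follows (an illustration of the operator itself, not the PixelFlow implementation):

```python
def composite_back_to_front(samples):
    """Composite depth-sorted (r, g, b, alpha) samples with the 'over' operator.

    samples are ordered back (farthest) to front (nearest), as produced by a
    visibility sort of the cells along the ray. Colors are assumed
    non-premultiplied, with all channels in [0, 1].
    """
    r = g = b = 0.0
    for sr, sg, sb, sa in samples:
        # Each nearer sample partially occludes what has accumulated behind it.
        r = sa * sr + (1.0 - sa) * r
        g = sa * sg + (1.0 - sa) * g
        b = sa * sb + (1.0 - sa) * b
    return (r, g, b)
```

Because the operator is associative (though not commutative), partial results from different processors can be combined in a fixed linear order, which is exactly what a hardware composition network exploits.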
Real-time volume visualization of sensor data for target detection and classification
Robert A. Cross
The Advanced Volume Visualization Display (AVVD) research program is a joint research program between the Fraunhofer Center for Research in Computer Graphics, Inc. and Innovative Research and Development Corp. It is dedicated to the real-time visualization of high-resolution volumetric sensor data sets, maximizing the use of the human visual system to facilitate detection and classification in extremely hostile environments. The AVVD program has successfully demonstrated the application of high-speed volume visualization to a number of detection and classification problems. Recent emphasis has been on sonar for undersea imaging using data from the Naval Undersea Warfare Center -- Division Newport's High Resolution Array (HRA), and rapid mine detection using data from the Coastal System Station's Toroidal Volume Search Sonar (TVSS). The AVVD system introduced a new capability: the intuitive composition of several 'pings' into a synthetic volumetric set. This composite data is of higher resolution, approaching optical quality, with soft shadows and broad specularities.
Rendering of wavelet-decomposed volume data using adaptive block-wise reconstruction
Xuedong Yang, Mohammad H. Ghavamnia
In some scientific visualization applications, very high resolution 3D data can be encountered, where the data size significantly exceeds the physical size of memory. Thus, space efficiency is an important issue in volume rendering. In addition, with the rapid development of network technology, many visualization applications involve remote access to data. Therefore, efficient transmission of volume data is also an important concern. This paper presents a method of directly rendering wavelet-compressed volume data, which reduces the memory requirements while keeping the on-the-fly voxel reconstruction overhead low. The volume data is decomposed by a wavelet transform, and the voxel values are reconstructed on the fly during the rendering process. In contrast to point-wise reconstruction techniques, which contain implicit reconstruction redundancy, an efficient block-wise reconstruction method is proposed in this paper. A significant improvement in computational performance is achieved by using a cache algorithm to temporarily retain reconstructed blocks for use by adjacent rays. Further acceleration is achieved by an adaptive voxel reconstruction method, which is equivalent to the popular octree acceleration technique but is fully implemented within the framework of the wavelet transform.
Rendering II
icon_mobile_dropdown
Data-dependent optimizations for permutation volume rendering
Craig M. Wittenbrink, Kwansik Kim
We have developed a highly efficient, high-fidelity approach for parallel volume rendering called permutation warping. Permutation warping may use any one-pass filter kernel, an example of which is trilinear reconstruction, an advantage over the shear-warp approach. This work discusses experiments in improving permutation warping using data-dependent optimizations to make it more competitive in speed with the shear-warp algorithm. We use a linear octree on each processor for collapsing homogeneous regions and eliminating empty space. Static load balancing is also used to redistribute nodes from a processor's octree to achieve higher efficiencies. In studies on a 16,384-processor MasPar MP-2, we have measured improvements of 3 to 5 times over our previous results. Run times are 73 milliseconds, 29 Mvoxels/second, or 14 frames/second for 128³ volumes, the fastest MasPar volume rendering numbers in the literature. Run times are 427 milliseconds, 39 Mvoxels/second, or 2 frames/second for 256³ volumes. These performance numbers show that coherency adaptations are effective for permutation warping. Because permutation warping has good scalability characteristics, it proves to be a superior approach for massively parallel computers when image fidelity is a required feature. We have provided further evidence for the utility of permutation warping as a scalable, high-fidelity, and high-performance approach to parallel volume visualization.
Distributing a GIS using a parallel data approach
John R. Monde, Michael Wild
The limitations of serial processors for managing large, computationally intensive dataset problems in fields such as visualization and Geographical Information Systems (GIS) are well known. Parallel processing techniques, where one or many computational tasks are distributed across a number of processing elements, have been proposed as a solution to the problem. We describe a model for visualizing oceanographic data that extends an earlier technique of using data-parallel algorithms on a dedicated parallel computer to an object-oriented distributed visualization system that forms a virtual parallel machine on a network of computers. This paper presents a visualization model being developed by the University of Southern Mississippi demonstrating interactive visualization of oceanographic data. The test case involves visualization of two- and three-dimensional oceanographic data (salinity, sound speed profile, currents, temperature, and depth) with Windows NT Pentium-class computers serving as both servers and client workstations.
Volumetric visualization algorithm development for an FPGA-based custom computing machine
Sami J. Sallinen, Jyrki Alakuijala, Hannu Helminen, et al.
Rendering volumetric medical images is a burdensome computational task for contemporary computers due to the large size of the data sets. Custom designed reconfigurable hardware could considerably speed up volume visualization if an algorithm suitable for the platform is used. We present an algorithm and speedup techniques for visualizing volumetric medical CT and MR images with a custom-computing machine based on a Field Programmable Gate Array (FPGA). We also present simulated performance results of the proposed algorithm calculated with a software implementation running on a desktop PC. Our algorithm is capable of generating perspective projection renderings of single and multiple isosurfaces with transparency, simulated X-ray images, and Maximum Intensity Projections (MIP). Although more speedup techniques exist for parallel projection than for perspective projection, we have constrained ourselves to perspective viewing, because of its importance in the field of radiotherapy. The algorithm we have developed is based on ray casting, and the rendering is sped up by three different methods: shading speedup by gradient precalculation, a new generalized version of Ray-Acceleration by Distance Coding (RADC), and background ray elimination by speculative ray selection.
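The distance-coding idea behind RADC can be illustrated in one dimension: precompute, for each voxel, a distance the ray may safely leap without passing over occupied data. A hedged sketch follows; this is a simple 1D stand-in for the concept, not the paper's generalized RADC:

```python
def march_with_distance_coding(occupied, start=0):
    """Return (index, steps) for the first occupied voxel along a 1D ray.

    occupied is a list of booleans. A backward sweep precomputes, per voxel,
    the distance to the nearest occupied voxel at or beyond it, so the ray
    can leap over runs of guaranteed-empty voxels instead of stepping one
    voxel at a time.
    """
    n = len(occupied)
    dist = [n] * n
    nxt = n  # index of the nearest occupied voxel seen so far
    for i in range(n - 1, -1, -1):
        if occupied[i]:
            nxt = i
        dist[i] = max(1, nxt - i) if nxt < n else n
    i = start
    steps = 0
    while i < n:
        steps += 1
        if occupied[i]:
            return i, steps
        i += dist[i]  # leap over guaranteed-empty voxels
    return None, steps
```

The same trade-off the paper exploits appears even in this toy: the distance code costs one extra pass over the data but turns long empty runs into single ray steps, which is where most of the casting time goes in sparse medical volumes.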