Proceedings Volume 5009

Visualization and Data Analysis 2003

Robert F. Erbacher, Philip C. Chen, Jonathan C. Roberts, et al.
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 9 June 2003
Contents: 11 Sessions, 33 Papers, 0 Presentations
Conference: Electronic Imaging 2003
Volume Number: 5009

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions
  • Applications
  • Biomedical Visualization
  • Algorithms I
  • Visualization Techniques
  • Volume Visualization
  • Internet and Web Visualizations
  • Algorithms II
  • Interaction
  • Large Scale Data Visualization
  • Scientific Visualization
  • Poster Session
Applications
Eigenskies: a method of visualizing weather prediction data
Bjorn Olsson, Anders Ynnerman, Reiner Lenz
Visualizing a weather prediction data set by actually synthesizing an image of the sky is a difficult problem. In this paper we present a method for synthesizing realistic sky images from weather prediction and climate prediction data. Images of the sky are combined with a number of weather parameters (such as pressure and temperature) to train an artificial neural network (ANN) to predict the appearance of the sky from given weather parameters. Hourly measurements from a period of eight months are used. Principal component analysis (PCA) is used to decompose images of the sky into their eigen components -- the eigenskies. In this way the image information is compressed into a small number of coefficients while still preserving the main information in the image. This means that the fine details of the cloud cover cannot be synthesized using this method. The PCA coefficients, together with the weather parameters measured at the same time, form a data point that is used to train the ANN. The results show that, although some discrepancies exist, the main appearance of the synthesized sky is correct. It is possible to distinguish between different types of weather: a rainy day looks rainy and a sunny day looks sunny.
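The eigensky decomposition is PCA over vectorized sky images; a minimal NumPy sketch (function names and the toy data are illustrative, not from the paper):

```python
import numpy as np

def eigenskies(images, k):
    """Decompose a stack of sky images (n_images x n_pixels) into the
    mean sky plus the top-k principal components (the "eigenskies")."""
    mean_sky = images.mean(axis=0)
    centered = images - mean_sky
    # SVD of the centered data; the rows of vt are the eigenskies
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigensky_basis = vt[:k]                  # k x n_pixels
    coeffs = centered @ eigensky_basis.T     # n_images x k
    return mean_sky, eigensky_basis, coeffs

def reconstruct(mean_sky, basis, coeffs):
    """Synthesize images back from a small number of coefficients."""
    return mean_sky + coeffs @ basis

# toy example: 6 "images" of 8 pixels each
rng = np.random.default_rng(0)
imgs = rng.normal(size=(6, 8))
mean_sky, basis, coeffs = eigenskies(imgs, k=3)
approx = reconstruct(mean_sky, basis, coeffs)
```

In the paper's setting each coefficient vector, paired with the weather parameters measured at the same hour, becomes one ANN training sample.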
Integration and utilization of different visualization methods and devices in a structure-based drug design process
Matti T. Grohn, Tommi N. Nyronen
Molecular visualization techniques are used at various stages of the computer-aided drug design process. Here, we report on the utilization of a four-wall immersive virtual room in the visualization of protein-drug complexes. The advantage of a virtual room compared to desktop graphics is that it provides a high-resolution, large field of view, helping observers of the visualization to dissect the individual important structural features more easily than with conventional molecular graphics visualizations alone.
Three-dimensional laser scanning and reconstruction of ear canal impressions for optimal design of hearing aid shells
Gabriella Tognola, Marta Parazzini, Cesare Svelto, et al.
The hearing aid shell (or earmold) couples the hearing aid with the user's ear. Proper fitting of the earmold to the subject's ear canal is required to achieve satisfactory wearing comfort, reduction in acoustic feedback, and avoidance of unwanted changes in the electroacoustic characteristics of the aid. To date, the hearing aid shell manufacturing process is fully manual: the shell is fabricated as a replica of the impression of the subject's ear canal. The typical post-impression processes applied to the ear impression modify the physical dimensions and the shape of the final shell, thus affecting the overall performance of the hearing aid. In the proposed approach, the surface of the original ear impression is 3D laser scanned by prototype equipment consisting of a pair of CCD cameras and a commercial He-Ne laser. The digitized surface is reconstructed by means of iterative deformations of a geometrical model of simple and regular shape. The triangular mesh thus obtained is smoothed by a non-shrinking low-pass spatial filter. With this approach, post-impression processes are no longer needed because the digitally reconstructed impression can be fed directly to rapid prototyping equipment, thus achieving better accuracy in obtaining an exact replica of the ear impression. Furthermore, digital reconstruction of the impression allows for simple and reliable storage and transmission of the model without handling a physical object.
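The abstract does not name its non-shrinking filter; the best-known scheme of this kind is Taubin's lambda|mu smoothing, sketched here on a closed noisy polyline (a triangle mesh works the same way, with mesh-vertex neighborhoods; the parameter values are the commonly cited defaults, not necessarily the paper's):

```python
import numpy as np

def taubin_smooth(points, neighbors, lam=0.5, mu=-0.53, iterations=10):
    """Non-shrinking low-pass smoothing (Taubin's lambda|mu scheme):
    alternate a shrinking Laplacian step (lam > 0) with an expanding
    step (mu < -lam) so low-frequency shape is preserved while
    high-frequency noise is attenuated."""
    p = points.astype(float).copy()
    for _ in range(iterations):
        for step in (lam, mu):
            # umbrella (uniform) Laplacian: neighbor average minus vertex
            lap = np.array([p[nbrs].mean(axis=0) - p[i]
                            for i, nbrs in enumerate(neighbors)])
            p += step * lap
    return p

# toy example: noisy circle as a closed polyline
n = 64
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
rng = np.random.default_rng(1)
noisy = np.c_[np.cos(t), np.sin(t)] + rng.normal(scale=0.05, size=(n, 2))
nbrs = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
smooth = taubin_smooth(noisy, nbrs, iterations=20)
```

Unlike plain Laplacian smoothing, the alternating negative step keeps the overall radius of the shape nearly constant, which is exactly the "non-shrinking" property an earmold replica needs.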
Biomedical Visualization
Lesion identification from scintimammography breast images
Niranjan Tallapally, Ramakrishnan Sundaram, Leonard R. Coover M.D.
The identification and localization of lesions in scintimammography breast images is a crucial stage in the early detection of cancer. Scintimammography breast images are obtained using a small, high-resolution breast-specific gamma camera (e.g. the LumaGEM™ Gamma Ray Camera, Gamma Medica Instruments, Northridge, CA). The resulting images contain information about possible lesions but they are very noisy. This requires a robust image segmentation algorithm to accurately contour the region should it exist. The algorithm must perform robust localization, minimize misclassifications, and lead to efficient practical implementations despite the influence of blurring and the presence of noise. This paper discusses and implements a robust spatial-domain algorithm, known as the Otsu algorithm, for automatic selection of a threshold level from the image histogram and for detecting and contouring objects/regions in grayscale digital images. Specifically, this paper develops the algorithm that is used to identify cancerous lesions in breast images. There are two primary objectives: first, to design and implement a contour detection algorithm suitable for the constraints posed by scintimammography breast images, and second, to provide the physician with a graphical user interface (GUI) which facilitates the visualization and classification of the images.
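Otsu's algorithm itself is compact enough to sketch; this NumPy version picks the threshold that maximizes between-class variance of the histogram (the bin count and the toy bimodal "lesion" image are illustrative):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the image histogram."""
    hist, edges = np.histogram(image, bins=bins)
    prob = hist.astype(float) / hist.sum()
    omega = np.cumsum(prob)                     # class-0 probability
    mu = np.cumsum(prob * np.arange(bins))      # class-0 cumulative mean
    mu_total = mu[-1]
    # between-class variance for every candidate threshold bin
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    k = np.nanargmax(sigma_b)
    return edges[k + 1]                         # threshold in image units

# toy bimodal image: dark background with one bright "lesion" block
rng = np.random.default_rng(2)
img = rng.normal(50, 10, size=(64, 64))
img[20:30, 20:30] = rng.normal(180, 10, size=(10, 10))
t = otsu_threshold(img)
mask = img > t
```

Thresholding with the returned value yields a binary mask from which the lesion contour can be traced.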
Visualizing human fatigue at joint level with the half-joint pair concept
Inmaculada Rodriguez, Ronan Boulic, Daniel Meziat
We present a model to predict and represent human fatigue in a 3D interactive system. A fatigue model has been developed for the fatigue assessment of several joints of the human body under the static-case hypothesis. The model incorporates normalized torques, joint strength, and maximum holding time as parameters. Fatigue evolution is predicted taking into account how these variables evolve over time. The fatigue model is embedded within an inverse kinematics engine that tries to achieve user-defined goals. During the animation, the predicted fatigue level is given to the graphical system in order to visualize it around its associated joint. The current fatigue value is exploited by the fatigue model to perform a new iteration towards the goal. The traditional joint model is broken down into two half-joints that better represent the anatomic organization of motion production through two independent muscle groups. Based on this organization, we can calculate and visualize independent fatigue variables for each antagonist muscle group. This type of visualization gives intuitive and clear feedback. Each half-joint maintains its own fatigue model and variable. The two fatigue variables are represented by means of dynamic semicircles, whose length and gradual color change indicate fatigue evolution over time.
Tool for metabolic and regulatory pathways visual analysis
The research activity in bio-informatics has now reached a new phase, called post-genomics, which aims at the description of gene products as part of global processes in cells. In this research area, the various tasks to be conducted by biologists call for methods inspired from knowledge extraction and representation, and from information visualization. We describe a system devoted to the visualization of metabolic pathways. This set of biological reactions describes product transformations in the cell. The analysis of various pathway visualization tools led us to two qualitative assertions. First, it is essential that the visualization environment preserve the drawing conventions borrowed from biology. Second, it seems important to offer an environment in which the user can navigate while preserving cognitive continuity. Our system focuses on these interactive and navigational issues. It offers mechanisms such as interactive color mapping and semantic zooming of pathways through various levels of detail. Our tool also aims at helping biologists in the analysis of experimental results measuring gene expression in various biological processes. Although our efforts have focused on the visualization of metabolic pathways, our system should help to visualize, analyze, and discover other types of biological pathways (e.g. regulatory pathways).
Skeleton-based myocardium segmentation
Andre Neubauer, Rainer Wegenkittl
Computer-aided analysis of four-dimensional tomography data plays an increasingly vital role in the field of diagnosis and treatment of heart function deficiencies. A key task for understanding the dynamics involved within a recorded cardiac cycle is to segment the acquired data to identify objects of interest, like the heart muscle (or myocardium) and the left ventricle. In this paper, a new robust and fast semi-automatic algorithm for segmentation of the myocardium from a CT data set is presented. The user marks the myocardium by placing a poly-line on one slice of the data volume. This poly-line forms a skeleton representing the cross-section of the myocardium on this slice. The skeleton is then automatically propagated and adjusted to the other slices in order to create a three-dimensional skeleton of the entire heart muscle. Then each voxel is assigned a value which denotes the voxel's connectivity to the skeleton. The boundaries of the myocardium can then be extracted as an iso-surface in the volume of connectivity values.
Algorithms I
Approximation of time-varying multiresolution data using error-based temporal-spatial reuse
Christof Nuber, Eric C. LaMar, Bernd Hamann, et al.
We extend the notion of multi-resolution spatial data approximation of static datasets to spatio-temporal approximation of time-varying datasets. By including the temporal dimension, we allow a region of one time-step to approximate a congruent region at another time-step. Approximations of static datasets are generated by refining an approximation until a given error-bound is met. To approximate time-varying datasets, we use data from another time-step when that data meets a given error-bound for the current time-step. Our technique exploits the fact that time-varying datasets typically do not change uniformly over time. By loading data from rapidly changing regions only, less data needs to be loaded to generate an approximation. Regions that hardly change are not loaded and are approximated by regions from another time-step. Common techniques typically only permit binary classification between consecutive time-steps, whereas our technique allows a run-time error-criterion to be used between non-consecutive time-steps. The errors between time-steps are calculated in a pre-processing step and stored in error-tables. These error-tables are used to evaluate the error criterion at run-time, so the underlying data does not need to be accessed.
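The run-time reuse test reduces to a table lookup; a toy sketch of the idea, with whole frames standing in for per-region data and an invented error bound:

```python
import numpy as np

def build_error_table(frames):
    """Pre-processing: for every pair of time-steps, store the maximum
    absolute difference (here per whole frame, for brevity; the paper
    works per region)."""
    n = len(frames)
    table = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            table[i, j] = np.abs(frames[i] - frames[j]).max()
    return table

def frame_for_timestep(t, loaded, table, error_bound):
    """Run-time: reuse an already-loaded time-step whose precomputed
    error against time-step t meets the bound; otherwise load t."""
    for s in loaded:
        if table[t, s] <= error_bound:
            return s          # approximate t by the cached time-step s
    loaded.append(t)          # cache miss: load the real data
    return t

# toy time-varying field: changes slowly, then jumps
frames = [np.full((4, 4), v, float) for v in (0.0, 0.1, 0.15, 3.0)]
table = build_error_table(frames)
loaded = [0]
used = [frame_for_timestep(t, loaded, table, error_bound=0.2)
        for t in range(4)]
```

With the bound 0.2, time-steps 1 and 2 are approximated by cached time-step 0 and only the large jump at time-step 3 forces a load.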
Normalized-cut algorithm for hierarchical vector field data segmentation
Jiann-Liang Chen, Zhaojun Bai, Bernd Hamann, et al.
In the context of vector field data visualization, it is often desirable to construct a hierarchical data representation. One possibility to construct a hierarchy is based on clustering vectors using certain similarity criteria. We combine two fundamental approaches to cluster vectors and construct hierarchical vector field representations. For clustering, a locally constructed linear least-squares approximation is incorporated into a similarity measure that considers both Euclidean distance between point pairs (for which dependent vector data are given) and difference in vector values. A modified normalized cut (NC) method is used to obtain a near-optimal clustering of a given discrete vector field data set. To obtain a hierarchical representation, the NC method is applied recursively after the construction of coarse-level clusters. We have applied our NC-based segmentation method to simple, analytically defined vector fields as well as discrete vector field data generated by turbulent flow simulation. Our test results indicate that our proposed adaptation of the original NC method is a promising method as it leads to segmentation results that capture the qualitative and topological nature of vector field data.
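The combined similarity measure and the normalized-cut relaxation can be sketched compactly; the Gaussian form of the weights, the median split, and all parameter values below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def ncut_bipartition(points, vectors, sigma_x=1.0, sigma_v=1.0):
    """Two-way normalized cut on a discrete vector field. The weight
    combines Euclidean distance between sample points with the
    difference between their vector values."""
    dx = points[:, None, :] - points[None, :, :]
    dv = vectors[:, None, :] - vectors[None, :, :]
    w = np.exp(-(dx ** 2).sum(-1) / sigma_x ** 2) * \
        np.exp(-(dv ** 2).sum(-1) / sigma_v ** 2)
    d = w.sum(axis=1)
    # normalized Laplacian; its 2nd-smallest eigenvector approximates
    # the optimal normalized cut (Shi-Malik relaxation)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    lap = np.eye(len(w)) - d_inv_sqrt[:, None] * w * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(lap)
    fiedler = d_inv_sqrt * vecs[:, 1]
    return fiedler > np.median(fiedler)

# toy field: left half points right, right half points up
pts = np.array([[x, y] for x in range(4) for y in range(2)], float)
vecs = np.where(pts[:, :1] < 2, [1.0, 0.0], [0.0, 1.0])
labels = ncut_bipartition(pts, vecs, sigma_x=4.0, sigma_v=0.5)
```

Applying the bipartition recursively to each cluster yields the hierarchical representation described in the abstract.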
Digital image acquisition and continuous zoom display from multiresolution views using heterogeneous image pyramids
There are many ways of capturing images to represent a detailed scene. Our motivation is to use inexpensive digital cameras with little setup requirements and to allow photographers to differentially capture both low-resolution overviews and high-resolution details. We present the heterogeneous image pyramid as a non-uniform representation composed of multiple captured multi-resolution images. Each resolution image captures a specific portion of the scene at the photographer’s discretion with the desired resolution. These images are highly correlated since they are captured from the same scene. Consequently, these images can be registered and represented more compactly in a 3-dimensional spatial image pyramid called the heterogeneous image pyramid.
Visualization Techniques
Real-time view-dependent extraction of isosurfaces from adaptively refined octrees and tetrahedral meshes
David C. Fang, Jevan T. Gray, Bernd Hamann, et al.
We describe an adaptive isosurface visualization scheme designed for perspective rendering, taking into account the center of interest and other viewer-specified parameters. To hierarchically decompose a given data domain into a multiresolution data structure, we implement and compare two spatial subdivision data structures: an octree and a recursively defined tetrahedral mesh based on longest-edge bisection. We have implemented our view-dependent isosurface visualization approach using both data structures -- comparing storage requirements, computational efficiency, and visual quality.
Texture analysis and scientific visualization
Sebastien Mavromatis, Jean-Marc Boï
In this paper, we propose a new formalism that makes it possible to take image textural features into account in a very robust and selective way. This approach also allows these features to be visualized so that experts can efficiently supervise an image segmentation process based on texture analysis. The texture concept has been studied through different approaches. One of them, based on the notion of ordered local extrema, is very promising. Unfortunately, this approach does not handle texture directionality, and the mathematical morphology formalism on which it is based does not allow extensions to this feature. This led us to design a new formalism for texture representation which is able to include directionality features. It produces a representation of texture-relevant features in the form of a surface z = f(x,y). The visualization of this surface gives experts sufficient information to discriminate between different textures.
Paper landscapes: a visualization design methodology
Paper landscape refers to both an iterative design process and a document as an aid to the design and development process for creating new information visualizations. A paper landscape engages all stakeholders early in the process of creating new visualizations and is used to solicit input; clarify ideas, features, requirements, tasks; and obtain support for the proposal, whether group consensus, market validation or project funding.
Volume Visualization
Quantitative image-level evaluation of multiresolution 3D texture-based volume rendering
Kim M. Edlund, Thomas P. Caudell
This research focuses on a quantitative evaluation of images produced by multi-resolution 3D texture-based volume rendering methods. Volume rendering techniques utilize nearly all the data in a volumetric data set to construct an image, so using coarser versions of the original data may negatively impact the display quality of the images produced. The trade-off between more efficient use of the memory needed to store a multi-resolution representation and the potential sacrifice of image quality is characterized by visual inspection and by two image quality measurements: root mean square error (RMSE) and normalized mutual information (NMI). RMSE is a traditional image quality measurement and NMI is a recent technique used in image processing and human vision research that incorporates image entropies into a concise, intuitive information-based measurement to quantify information content. Using image entropy as a measure of information can help determine if there is some kind of structural artifact in the image, so it may complement RMSE, which is often used to identify random error. The analysis of images produced from multi-resolution volume rendering experiments indicates that there is additional merit in looking at information-based measurements of image quality as well as using traditional measurements to identify and quantitatively evaluate regions of mismatch.
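Both measurements are easy to reproduce; this sketch follows the standard definitions (RMSE, and the entropy-ratio form of NMI), with toy images standing in for rendered frames:

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two images."""
    return np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2))

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A,B); about 1.0 for unrelated images,
    up to 2.0 when the images share all their structure."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(3)
ref = rng.uniform(0, 255, size=(64, 64))
noisy = ref + rng.normal(0, 5, size=ref.shape)            # random error
shuffled = rng.permutation(ref.ravel()).reshape(ref.shape)  # structure gone
```

The two measures are complementary in exactly the sense the abstract describes: the shuffled image keeps the same histogram (small change in entropy terms alone) but loses all joint structure, which NMI detects, while RMSE responds to the pixelwise random error in the noisy image.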
Accelerated isosurface polygonization for dynamic volume data using programmable graphics hardware
Makoto Matsumura, Ken Anjyo
We present a novel approach to accelerating isosurface polygonization by exploiting the power of the graphics processing unit (GPU), along with the Marching Cubes (MC) algorithm. Efficient use of the GPU in our approach allows us to accelerate cube index computation for all cubes of the input data set, which is the dominant process in MC. Our approach is unique in that it generates information for geometric construction of the data set in pixel shading hardware. The techniques in our approach are similar to those of a multipass method in that multiple rendering passes are used to render the final image. However, in our approach the result of the previous pass is fed into the geometry pipeline, whereas in the multipass method it is fed into an image pipeline. Our experimental results illustrate that our approach is well suited to volume data generated in real time, such as metaballs and numerical simulation results, without imposing any conditions on the input data set.
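The per-cube classification that the paper moves onto the GPU is an 8-bit index built from the cube's corner values; a CPU-side NumPy sketch of the same computation (the corner ordering and the "inside = below isovalue" convention here are my assumptions, not the paper's):

```python
import numpy as np

# corner offsets of a cell in (x, y, z)
CORNERS = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
           (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]

def cube_indices(volume, isovalue):
    """Compute the 8-bit Marching Cubes case index for every cell:
    bit i is set when corner i lies inside the isosurface."""
    nx, ny, nz = volume.shape
    idx = np.zeros((nx - 1, ny - 1, nz - 1), dtype=np.uint8)
    for bit, (dx, dy, dz) in enumerate(CORNERS):
        inside = volume[dx:nx-1+dx, dy:ny-1+dy, dz:nz-1+dz] < isovalue
        idx |= inside.astype(np.uint8) << bit
    return idx

# toy volume: distance from the center; isovalue 1.5 gives a small sphere
g = np.mgrid[0:4, 0:4, 0:4].astype(float)
vol = np.sqrt(((g - 1.5) ** 2).sum(axis=0))
idx = cube_indices(vol, isovalue=1.5)
```

Cells with index 0 or 255 contain no surface; the remaining indices select triangle configurations from the standard MC lookup tables.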
Internet and Web Visualizations
Visualizing multiattribute Web transactions using a freeze technique
Ming C. Hao, Daniel Cotting, Umeshwar Dayal, et al.
Web transactions are multidimensional and have a number of attributes: client, URL, response time, and number of messages. One of the key questions is how to simultaneously lay out in a graph the multiple relationships, such as the relationships between the web client response times and URLs in a web access application. In this paper, we describe a freeze technique to enhance a physics-based visualization system for web transactions. The idea is to freeze one set of objects before laying out the next set of objects during the construction of the graph. As a result, we substantially reduce the force computation time. This technique consists of three steps: automated classification, a freeze operation, and a graph layout. These three steps are iterated until the final graph is generated. This iterated-freeze technique has been prototyped in several e-service applications at Hewlett Packard Laboratories. It has been used to visually analyze large volumes of service and sales transactions at online web sites.
Analysis and application of node layout algorithms for intrusion detection
Robert F. Erbacher, Zhouxuan Teng
The need to monitor today's networked computer systems for security purposes is a major concern. Our monitoring environment aids system administrators in keeping track of the activities on such systems with much lower time requirements than perusing typical log files. With many systems connected to the network, the task becomes significantly more difficult; if an attack is identified on one system, then all systems have likely been attacked. The ability to correlate activity among multiple machines is critical for complete analysis and monitoring of the environment. Developing an effective organization of the nodes (systems) on the display is a nontrivial task: the organization must clearly show activity on all systems simultaneously while not cluttering the display or unnecessarily distracting the user. This paper discusses the layout techniques we have experimented with and their effectiveness.
Algorithms II
Line and net pattern segmentation using shape modeling
Adam Huang, Gregory M. Nielson, Anshuman Razdan, et al.
Line and net patterns in a noisy environment exist in many biomedical images. Examples include: Blood vessels in angiography, white matter in brain MRI scans, and cell spindle fibers in confocal microscopic data. These piecewise linear patterns with a Gaussian-like profile can be differentiated from others by their distinctive shape characteristics. A shape-based modeling method is developed to enhance and segment line and net patterns. The algorithm is implemented in an enhancement/thresholding type of edge operators. Line and net features are enhanced by second partial derivatives and segmented by thresholding. The method is tested on synthetic, angiography, MRI, and confocal microscopic data. The results are compared to the implementation of matched filters and crest lines. It shows that our new method is robust and suitable for different types of data in a broad range of noise levels.
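The "enhancement by second partial derivatives" can be illustrated with a Hessian-eigenvalue ridge filter: a bright line has strong negative curvature across its axis. This single-scale NumPy sketch (toy image and threshold are my own; the paper's exact operator may differ) enhances and then thresholds such ridges:

```python
import numpy as np

def hessian_line_enhance(image):
    """Enhance bright line-like (ridge) structures using second partial
    derivatives: respond with the negated smaller Hessian eigenvalue,
    which is large across a bright Gaussian-profile line."""
    img = image.astype(float)
    ixx = np.gradient(np.gradient(img, axis=1), axis=1)
    iyy = np.gradient(np.gradient(img, axis=0), axis=0)
    ixy = np.gradient(np.gradient(img, axis=0), axis=1)
    # eigenvalues of the 2x2 Hessian [[ixx, ixy], [ixy, iyy]]
    trace_half = (ixx + iyy) / 2
    root = np.sqrt(((ixx - iyy) / 2) ** 2 + ixy ** 2)
    lam_small = trace_half - root
    return np.maximum(-lam_small, 0)

# toy image: one bright horizontal line with a Gaussian profile
y = np.arange(32)[:, None]
img = np.exp(-((y - 16) ** 2) / 4.0) * np.ones((1, 32))
response = hessian_line_enhance(img)
mask = response > 0.7 * response.max()   # enhancement/thresholding step
```

The response peaks exactly on the line's crest, so thresholding recovers the line while the flat flanks (positive or zero curvature) are suppressed.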
Extracting motion velocities from 3D image sequences and coupled spatio-temporal smoothing
Tobias Preusser, Martin Rumpf
Recent imaging machinery delivers sequences of large-scale three-dimensional (3D) images with a considerably small sampling width in time. In medical as well as engineering applications, the interest lies in the underlying deformation, growth, or motion phenomena. A robust method is presented to extract motion velocities from such image sequences. To avoid ill-posedness of the problem, one has to restrict the study to certain motion types, which are related to the concrete application. The derived formulas for the motion velocities clearly reflect the geometry of the motion. Robustness of the presented implementation is based on local regularizations in space-time, whereby geometric quantities on the image sequences are evaluated on the local regularizations. Examples outline the potential of the proposed method in medical applications (3D ultrasound sequences) and experimental fluid dynamics (3D flow in porous media). As an improved regularization approach, an effective denoising method based on anisotropic geometric diffusion for 3D data sets is discussed, which respects important features on level sets, such as edges, corners, and accelerated motions, and preserves them during the smoothing process. Its application as a pre-processing step turns out to be especially advisable for image sequences with a low signal-to-noise ratio.
Interaction
Audio-visual situational awareness for general aviation pilots
Weather is one of the major causes of general aviation accidents. One possible cause is that the pilot may not absorb and retain all the weather information she is required to review prior to flight. A second cause is the inadequacy of in-flight weather updates: pilots are limited to verbal updates via aircraft radio contact with a ground-based weather specialist. We propose weather visualization and interaction methods tailored for general aviation pilots to improve understanding of pre-flight weather data and improve in-flight weather updates. Our system, Aviation Weather Environment (AWE), utilizes information visualization techniques, a direct manipulation graphical interface, and a speech-based interface to improve a pilot's situational awareness of relevant weather data. The system design is based on a user study and feedback from pilots.
Enabling multipurpose image interaction in modular visualization environments
Fotis Chatzinikos, Helen Wright
Recent improvements in processor and graphics power mean researchers can now interact with their running simulation at the same time as they view the results. This so-called 'computational steering' brings greater insight to the investigation process and is even more compelling when immersed in the visual representation of the data. Visualization of pre-computed data likewise benefits if we can generate the image without the need for separate dials and sliders. This image-based interaction, both for computational steering and visualization purposes, is therefore an important goal for scientists and engineers but typically requires specialist programming of the components. This paper introduces a new architecture that enables Modular Visualization Environments (MVEs) -- general-purpose, extensible systems -- to operate in this way. Novel elements include an input-describing data structure to convey information about the simulation parameters, and proxy graphics objects that convert ordinary image geometry into interactive elements. To promote its adoption, the architecture is designed such that the usual MVE appearance is retained. Its implementation in IRIS Explorer and Open Inventor is described and three case studies are presented: two deal with computational steering, whilst the third shows how a visualization technique, the contour plot, can be modified to take advantage of image-based interaction.
Large Scale Data Visualization
Visualising large hierarchies with Flextree
Hongzhi Song, Edwin P. Curran, Roy Sterritt
One of the main tasks in Information Visualisation research is creating visual tools to facilitate human understanding of large and complex information spaces. Hierarchies, being a good mechanism for organising such information, are ubiquitous. Although much research effort has been spent on finding useful representations for hierarchies, visualising large hierarchies is still a difficult topic. One of the difficulties is how to show both structure and node content information in one view. Another is how to achieve multiple foci in a focus+context visualisation. This paper describes a novel hierarchy visualisation technique called FlexTree to address these problems. It contains some important features that have not been exploited so far. In this visualisation, a profile or contour unique to the hierarchy being visualised can be gained in a histogram-like layout. A normalised view of a common attribute of all nodes can be acquired, and selection of this attribute is controllable by the user. Multiple foci are consistently accessible within a global context through interaction. Furthermore, it can handle a large hierarchy containing several thousand nodes in a PC environment. Finally, results from an informal evaluation are also presented.
LOD-based clustering techniques for efficient large-scale terrain storage and visualization
Xiaohong Bao, Renato Pajarola
Large multi-resolution terrain data sets are usually stored out-of-core. To visualize terrain data at interactive frame rates, the data needs to be organized on disk, loaded into main memory part by part, then rendered efficiently. Many main-memory algorithms have been proposed for efficient vertex selection and mesh construction. Organization of terrain data on disk is quite difficult because the error, the triangulation dependency and the spatial location of each vertex all need to be considered. Previous terrain clustering algorithms did not consider the per-vertex approximation error of individual terrain data sets. Therefore, the vertex sequences on disk are exactly the same for any terrain. In this paper, we propose a novel clustering algorithm which introduces the level-of-detail (LOD) information to terrain data organization to map multi-resolution terrain data to external memory. In our approach the LOD parameters of the terrain elevation points are reflected during clustering. The experiments show that dynamic loading and paging of terrain data at varying LOD is very efficient and minimizes page faults. Additionally, the preprocessing of this algorithm is very fast and works out-of-core.
Scientific Visualization
Visual identification of structure in four-dimensional data
Robert R. Johnson, Rodney Millar, Isaac Ben
Structure in 4-D data is visualized with a new modeling algorithm called SBP. The SBP vector fusion algorithm makes 3-D display space models of data having any dimensionality that is input in matrix form. SBP maps points on complete manifolds in 4-D to 3-D to visualize any 4-D data. Starting with familiar shapes in 2-D data, 3-D models are constructed to demonstrate how SBP works. Then 3-D data is modeled in 3-D display space. Finally 4-D data are modeled in 3-D display space. The 3-D display space models are points mapped from collections of points on 4-D manifolds. Two types of SBP models are discussed: the latitude/longitude collection and the helical collection. SBP also maps points on complete manifolds of n-D data to 3-D display space models. The objective of this work is to present what 4-D spheres and tori look like when visualized from 4-D data using the SBP algorithm. This demonstrates the SBP algorithm as a new and useful tool for visualizing and understanding 4-D data, and by implication, n-D geometry. Future uses for SBP could be modeling and studying protein structure and space-time structure in general relativity and string theory.
Marsoweb: a collaborative web facility for Mars landing site and global data studies
D. Glenn Deardorff, Virginia C. Gulick
Marsoweb is a collaborative web environment that has been developed for the Mars research community to better visualize and analyze Mars orbiter data. Its goal is to enable online data discovery by providing an intuitive, interactive interface to data from the Mars Global Surveyor and other orbiters. Recently, it has served a prominent role as a resource center for those involved in landing site selection for the Mars Exploration Rover 2003 missions. In addition to hosting a repository of landing site memoranda and workshop talks, it includes a Java-based interface to a variety of data maps and images. This interface enables the display and numerical querying of data, and allows data profiles to be rendered from user-drawn cross-sections. High-resolution Mars Orbiter Camera (MOC) images (currently, over 100,000) can be graphically perused; browser-based image processing tools can be used on MOC images of potential landing sites. An automated VRML atlas allows users to construct "flyovers" of their own regions-of-interest in 3D. These capabilities enable Marsoweb to be used for general global data studies, in addition to those specific to landing site selection. As of September 2002, over 70,000 distinct users from NASA, USGS, academia, and the general public have accessed Marsoweb.
Visualizing realistic 3D urban environments
Aaron Lee, Tuolin Chen, Michael Brunig, et al.
Visualizing complex urban environments has been an active research topic due to its wide variety of applications in city planning: road construction, emergency facilities planning, and optimal placement of wireless carrier base stations. Traditional 2D visualizations have been around for a long time but they only provide a schematic line-drawing bird's eye view and are sometimes confusing to understand due to the lack of depth information. Early versions of 3D systems have been developed for very expensive graphics workstations which seriously limited the availability. In this paper we describe a 3D visualization system for a desktop PC which integrates multiple resolutions of data and provides a realistic view of the urban environment.
Visualization of experimental earthquake data
Gunther H. Weber, Marco Schneider, Daniel W. Wilson, et al.
We present a system that visualizes displacement, acceleration, and strain measured during an earthquake simulation experiment in a geotechnical centrifuge. Our visualization tool starts by reading the data describing the experiment set-up and displaying it along with icons for the sensors used during data acquisition. Different sensor types (measuring acceleration, displacement, and strain) are indicated by different icons. One general experiment set-up is used in a sequence of simulated earthquake events. Once a user has selected a particular event, measured data can be displayed as a two-dimensional (2D) graph/plot by clicking on the corresponding sensors. Multiple sensors can be animated to obtain a three-dimensional (3D) visualization of the measured data.
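As a rough illustration of the organisation this abstract describes (one experiment set-up, several simulated events, per-sensor time series selectable for 2D plotting), the sketch below uses hypothetical class and field names that are not the authors' API:

```python
# Hypothetical sketch: sensors of three types carry an icon for the
# set-up view and a time series per simulated event; selecting an event
# and clicking a sensor yields the samples for a 2D graph.

ICONS = {"acceleration": "cone", "displacement": "cube", "strain": "sphere"}

class Sensor:
    def __init__(self, name, kind, position):
        self.name = name
        self.kind = kind          # "acceleration" | "displacement" | "strain"
        self.position = position  # (x, y, z) in the centrifuge model
        self.series = {}          # event id -> list of (t, value) samples

    def icon(self):
        return ICONS[self.kind]

    def plot_data(self, event_id):
        """Return the (t, value) samples to draw as a 2D graph."""
        return self.series[event_id]

# Minimal usage: one accelerometer, one simulated event.
acc = Sensor("A1", "acceleration", (0.0, 0.1, 0.2))
acc.series["event_03"] = [(0.00, 0.0), (0.01, 1.8), (0.02, -2.4), (0.03, 0.6)]

peak = max(abs(v) for _, v in acc.plot_data("event_03"))  # peak acceleration
```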
Poster Session
Symplectic ray tracing: a new approach to nonlinear ray tracing by using Hamiltonian dynamics
This paper describes a symplectic ray tracing method for calculating the flows of nonlinear dynamical systems. The symplectic ray tracing method traces the paths of photons moving along orbits calculated using Hamilton's canonical equations. Using this method, we can simulate nonlinear dynamical systems of various dimensions with accurate calculation and a quick implementation of a scientific visualization system. This paper also demonstrates some visualization results for nonlinear dynamical systems computed using the symplectic ray tracing method.
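A minimal sketch of the underlying idea, not the paper's implementation: integrate Hamilton's canonical equations with a symplectic (structure-preserving) step and record the traced path. The Hamiltonian H = p²/2 + V(q), the harmonic potential, and the step size are illustrative assumptions:

```python
# Symplectic (semi-implicit) Euler: kick the momentum with -dH/dq,
# then drift the position with dH/dp. This preserves the symplectic
# structure, so energy error stays bounded over long traces.

def trace_ray(q, p, grad_v, dt, steps):
    path = [(q, p)]
    for _ in range(steps):
        p = p - dt * grad_v(q)   # dp/dt = -dH/dq
        q = q + dt * p           # dq/dt =  dH/dp
        path.append((q, p))
    return path

# Example: harmonic potential V(q) = q^2/2, so grad V(q) = q.
path = trace_ray(q=1.0, p=0.0, grad_v=lambda q: q, dt=0.1, steps=200)

# Energy along the path oscillates but does not drift away.
energies = [0.5 * p * p + 0.5 * q * q for q, p in path]
drift = max(energies) - min(energies)
```

With a non-symplectic integrator (e.g. plain forward Euler) the same trace would gain energy steadily, which is why symplectic steps suit long photon orbits.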
Automatic traffic real-time analysis system based on video
Liya Ding, Jilin Liu, Qubo Zhou, et al.
Automatic traffic analysis is very important in the modern world with its heavy traffic. It can be achieved in numerous ways; among them, detection and analysis through a video system, which can provide rich information while causing little disturbance to the traffic, is an ideal choice. The proposed traffic vision analysis system uses an image acquisition card to capture real-time images of the traffic scene through a video camera, and then exploits the traffic scene sequence together with image processing and analysis techniques to detect the presence and movement of vehicles. After first removing the complex, ever-changing traffic background, the system segments each vehicle in the user-specified region of interest. The system extracts features from each vehicle and tracks them through the image sequence. Combined with calibration, the system calculates traffic information such as vehicle speeds and types, flow volume, traffic density, lane queue lengths, and vehicle turning information. Traffic congestion and vehicle shadows complicate vehicle detection, segmentation, and tracking, so we have made a great effort to investigate methods for dealing with them.
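The background-removal step the abstract mentions is commonly done with an adaptive running-average background model; the sketch below shows that general idea on a toy grey-value grid, with the adaptation rate and threshold as assumptions (the paper's actual method may differ):

```python
# Adaptive background subtraction on a tiny "frame" (grid of grey
# values): the background is a running average of past frames, and a
# pixel is foreground when it differs from the background by more
# than a threshold.

ALPHA = 0.1     # background adaptation rate (assumed)
THRESH = 30     # foreground threshold in grey levels (assumed)

def update_background(bg, frame):
    return [[(1 - ALPHA) * b + ALPHA * f for b, f in zip(br, fr)]
            for br, fr in zip(bg, frame)]

def foreground_mask(bg, frame):
    return [[abs(f - b) > THRESH for b, f in zip(br, fr)]
            for br, fr in zip(bg, frame)]

# A static road (grey value 100) with one bright "vehicle" pixel entering.
road = [[100] * 4 for _ in range(3)]
bg = [row[:] for row in road]
frame = [row[:] for row in road]
frame[1][2] = 200                 # vehicle pixel

mask = foreground_mask(bg, frame)   # only the vehicle pixel is foreground
bg = update_background(bg, frame)   # background slowly absorbs the change
```

The slow adaptation lets the model follow gradual illumination changes while a fast-moving vehicle still stands out; shadows, as the abstract notes, also differ from the background and need extra handling.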
StudyDesk: interactive data analysis and scientific visualization in a semi-immersive environment
Peter Stephenson, Pedro Branco, Joachim Tesch, et al.
We present a highly interactive large-scale visualization environment for performing the tasks of detection and classification of sonar contacts in a low signal-to-noise environment. The system described deals with simulated passive sonar data from an advanced towed array system, chosen for both the amount and high dimensionality typical of the data produced. Our prototype application employs a semi-immersive 3D display system, multiple and mixed modalities of interaction and feedback, and state-of-the-art volumetric visualization and abstraction techniques. Our two-stage approach seeks to use human talents, experience, and intuition to leverage and direct high-performance computing resources. The first stage (search&detect) aims at providing advanced visualization and human/machine interface techniques to enable sonar operators to quickly and confidently detect contacts in low signal-to-noise dataspaces. In the second component (analyze&classify), we utilize a highly interactive volumetric representation of the tactical SensorSpace that the operator can interrogate using various visual and auditory cues. This presentation describes the application scenario, approach, and implementation of the visualization environment and concludes with experiences, lessons learned, and future directions from both the interactive large-scale visualization and application points of view.
VRML visualization system of the Castelo de Vide aquifer and interaction with the Sever River
Luis Ribeiro, Carlos Tavares Ribeiro, Jose P. Monteiro, et al.
This paper presents the development process of a visualization system for groundwater diffuse and concentrated flows, and for quality parameters, within the same continuous medium, based on the integration of VRML solutions with groundwater simulation models. The system is being implemented for the Castelo de Vide aquifer to provide virtual reality visualization of simulations and variability analyses of the aquifer and its interaction with the Sever river, considering climate and anthropogenic factors, beyond pumping for water supply, on the 3D model of the aquifer, using data available in the Research Center. The implementation process, which provides a compatibility platform between the FEM and the VR environment, converts data attributes into graphic qualities or shape modifications and adds contextual information such as the user's initial point of view, the available lights, and the background. An interface for three-dimensional virtual reality, based on a standard web browser equipped with any of the many freely available plug-ins, is also described.
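The conversion of data attributes into graphic qualities can be pictured as in the toy sketch below, which maps a scalar attribute (say, hydraulic head at mesh nodes) to a per-vertex colour ramp in a VRML97 node and adds an initial Viewpoint; the node layout is deliberately simplified and is not the authors' pipeline:

```python
# Toy FEM-to-VRML step: attribute values become colours, and the scene
# carries contextual information such as the initial point of view.

def head_to_rgb(value, vmin, vmax):
    """Linear blue-to-red ramp over the attribute range."""
    t = (value - vmin) / (vmax - vmin)
    return (t, 0.0, 1.0 - t)

def vrml_colored_points(points, heads):
    vmin, vmax = min(heads), max(heads)
    coords = ", ".join("%g %g %g" % p for p in points)
    colors = ", ".join("%.2f %.2f %.2f" % head_to_rgb(h, vmin, vmax)
                       for h in heads)
    return (
        "#VRML V2.0 utf8\n"
        "Viewpoint { position 0 0 10 }\n"
        "Shape { geometry PointSet {\n"
        "  coord Coordinate { point [ %s ] }\n"
        "  color Color { color [ %s ] }\n"
        "} }\n" % (coords, colors)
    )

# Two mesh nodes with head values 5.0 m (blue) and 9.0 m (red).
scene = vrml_colored_points([(0, 0, 0), (1, 0, 0)], [5.0, 9.0])
```

The resulting text file is exactly what a browser VRML plug-in of the era would load, which is why the approach needs only a standard web browser on the client side.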
INSPECT: a dynamic visual query system for geospatial information exploration
Sang-Joon Lee, James K. Hahn, Alfred M. Powell Jr., et al.
This paper presents a visual information exploration tool called INSPECT. INSPECT provides geospatial information analysts with an effective way to visually filter multidimensional data and explore the underlying information contained within it. In geospatial intelligence information analyses, it is necessary to query, visualize, and understand the data combined with location information. These operations are not simple, since they include complex database queries of both spatial and non-spatial data. Moreover, analysts need to repeatedly query and visualize data until they reach a desirable conclusion. Using INSPECT, analysts are able to experimentally query the database, avoiding the complex database schema, and visualize the results in a geospatial context with minimal effort. The tools available with INSPECT include see-through lens visualization, relationship visualization, time-varying analysis, saved lens-filter sessions, a data reachback capability, and iterative visual exploration.
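The see-through-lens idea combines a spatial test (is the point under a movable lens?) with a non-spatial attribute predicate; the sketch below illustrates that combination with hypothetical record fields, not INSPECT's actual data model:

```python
# A circular see-through lens over 2D geospatial records: only points
# inside the lens that also satisfy the attribute query are returned
# for detailed display.

def lens_filter(points, cx, cy, r, predicate):
    """Return the points under the lens that pass the attribute query."""
    return [p for p in points
            if (p["x"] - cx) ** 2 + (p["y"] - cy) ** 2 <= r * r
            and predicate(p)]

sites = [
    {"x": 1.0, "y": 1.0, "population": 5000},
    {"x": 1.5, "y": 0.5, "population": 200},    # under the lens, filtered out
    {"x": 9.0, "y": 9.0, "population": 8000},   # outside the lens
]
hits = lens_filter(sites, cx=1.0, cy=1.0, r=2.0,
                   predicate=lambda p: p["population"] > 1000)
```

Dragging the lens or editing the predicate re-runs only this cheap filter, which is what makes the iterative query-then-visualize loop interactive.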
A faster technique for rendering meshes in multiple display systems
Randall E. Hand, Robert J. Moorhead II
Level-of-detail algorithms have been widely implemented in architectural VR walkthroughs and video games, but have not had widespread use in VR terrain visualization systems. This paper explains a set of optimizations that allow most current level-of-detail algorithms to run on the types of multiple-display systems used in VR. It improves both the visual quality of the system, through the use of graphics hardware acceleration, and the frame rate and running time, through modifications to the computations that drive the algorithms. Using ROAM as a testbed, results show improvements of between 10% and 100% on varying machines.
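The computation that drives ROAM-style terrain LOD is a screen-space-error test: a triangle's world-space geometric error is projected into pixels and compared against a tolerance. The sketch below shows one common form of that test; the projection formula, field of view, and tolerance are illustrative assumptions rather than this paper's exact metric:

```python
import math

# Split decision for terrain LOD: refine a triangle only when its
# geometric error would be visible, i.e. when it projects to more
# screen pixels than the tolerance allows.

def projected_error_px(geom_error, distance, fov_y_rad, screen_h_px):
    """Project a world-space error at a given distance onto the screen."""
    return geom_error * screen_h_px / (2.0 * distance * math.tan(fov_y_rad / 2.0))

def should_split(geom_error, distance,
                 fov_y_rad=math.radians(60), screen_h_px=1080, tol_px=2.0):
    return projected_error_px(geom_error, distance, fov_y_rad, screen_h_px) > tol_px

# The same 0.5 m error matters close to the viewer but not far away.
near = should_split(0.5, distance=50.0)
far = should_split(0.5, distance=5000.0)
```

In a multiple-display system each screen has its own frustum, so a test like this runs per display with that display's field of view and resolution.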