Proceedings Volume 1459

Extracting Meaning from Complex Data: Processing, Display, Interaction II

Edward J. Farrell
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 1 June 1991
Contents: 9 Sessions, 30 Papers, 0 Presentations
Conference: Electronic Imaging '91, 1991
Volume Number: 1459

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.

Sessions:
  • Volume Visualization
  • Multiple-Variable Techniques
  • Unstructured or Qualitative Information
  • Invited Presentation
  • Interpretation of Spatial Data
  • Animation Methods
  • Invited Presentation
  • Images and Sound for Data Interpretation
  • Software Systems
  • Applications
Volume Visualization
Octree optimization
Al Globus
A number of algorithms search large 3D arrays (computation space) for features of interest; the marching cubes isosurface generation described by Lorensen and Cline is an example. The speed of these algorithms depends on the time needed to find the features of interest in the data and to compute their graphic representation. Efficiently searching for these features is the topic of this paper. The author describes an optimized search that uses octrees to divide computation space. When the tree is walked, information stored in the branch nodes is used to prune portions of computation space, avoiding unnecessary memory references and tests for features of interest. This technique was implemented for marching cubes isosurface generation on computational fluid dynamics data. The code was then adapted to continuing particle traces in multiple-zoned data sets when a trace leaves one zone and enters another. For multiple isosurfaces, numerical experiments indicate a factor of 3.8 - 9.0 overall performance increase, measured by stopwatch, and a factor of 3.9 - 9.9 speedup in calculation times as measured by the UNIX times(2) utility. The overhead is a one-time cost of 0.2 - 2.8 times the time to compute an average isosurface, plus O(n) space with a constant factor less than one, where n is the number of grid points.
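The pruning test is essentially an interval check: each branch node stores the minimum and maximum scalar values found beneath it, and a whole subtree is skipped whenever the isovalue falls outside that range. A minimal Python sketch of the idea, with illustrative names and structure rather than the paper's code:

    import numpy as np

    def _halves(a0, a1):
        """Split an index range in two (or leave it whole if too small)."""
        m = (a0 + a1) // 2
        return ((a0, a1),) if a1 - a0 < 2 else ((a0, m), (m, a1))

    class OctreeNode:
        """Branch node caching the min/max scalar value of its subtree."""
        def __init__(self, data, x0, x1, y0, y1, z0, z1, leaf_size=8):
            self.bounds = (x0, x1, y0, y1, z0, z1)
            block = data[x0:x1, y0:y1, z0:z1]
            self.vmin, self.vmax = block.min(), block.max()
            self.children = []
            if max(x1 - x0, y1 - y0, z1 - z0) > leaf_size:
                for xs in _halves(x0, x1):
                    for ys in _halves(y0, y1):
                        for zs in _halves(z0, z1):
                            self.children.append(OctreeNode(
                                data, *xs, *ys, *zs, leaf_size))

    def visit_cells(node, isovalue, process_leaf):
        """Walk the tree, pruning subtrees that cannot hold the isosurface."""
        if isovalue < node.vmin or isovalue > node.vmax:
            return                  # prune: no cell here crosses the isovalue
        if not node.children:
            # e.g., run marching cubes on this block; in a real cell-based
            # extractor, adjacent blocks would overlap by one sample layer
            process_leaf(node.bounds)
        else:
            for child in node.children:
                visit_cells(child, isovalue, process_leaf)

The one-time overhead corresponds to building the tree; the min/max pairs stored in the branch nodes account for the O(n) extra space.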
Adaptive isosurface generation in a distortion-rate framework
Paul C. Ning, Lambertus Hesselink
The problem of accurately modeling a level surface with polygons is cast in a distortion-rate framework and efficient tilings across a range of resolutions are found. A distinctive feature of this work is the quantification of surface approximation error. Existing algorithms for extracting polygonal isosurfaces from sampled 3-D scalar fields typically partition the samples into cubical cells and generate triangles for each cell. Adaptive schemes use variable-size cells to allocate more triangles in regions where the tilings do poorly, and vice versa. In this paper, an octree structure is imposed on the data and then selectively pruned according to local error. To guide the pruning, an ideal level surface is defined from the 3-D samples, and a distortion measure is introduced to quantify the error associated with any polygonal approximation to the ideal. Using the polygon count as the rate measure, the performance of tilings is then characterized by distortion-rate pairs. With this performance criterion, the pruning algorithm then finds good approximations at multiple resolutions. Results are presented for both simulated and experimental data sets. Lower resolution models are used for interactive viewing and editing while full resolution models are usually reserved for final analysis or presentation.
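In the distortion-rate framing, each candidate pruning of the octree is summarized by a (distortion, polygon count) pair, and a tiling is chosen against a rate budget. A toy selection rule under that framing, with hypothetical names (the paper's pruning is driven by a specific surface distortion measure):

    def best_tiling(candidates, rate_budget):
        """Pick the lowest-distortion tiling whose polygon count fits the
        budget. `candidates` holds (distortion, rate, tiling) triples, one
        per pruned version of the octree; names are illustrative."""
        feasible = [c for c in candidates if c[1] <= rate_budget]
        if not feasible:
            raise ValueError("no tiling fits the given polygon budget")
        return min(feasible, key=lambda c: c[0])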
Modeling and visualization of scattered volumetric data
Gregory M. Nielson, Tim Dierks
This paper is concerned with the problem of analyzing and visualizing volumetric data. Volumetric data is a collection of 4-tuples (x_i, y_i, z_i; F_i), i = 1, ..., N, where F_i is the value of a dependent variable at the location of the independent variables (x_i, y_i, z_i). No assumptions are made about the locations of the samples of the independent variables. Most currently available methods for visualizing volumetric data assume that the independent data sites lie on a cuberille grid. In order to make these methods available for the more general situation of scattered volumetric data, a modeling function F(x, y, z) can be determined and then sampled on a cuberille grid. This report covers some techniques for obtaining the modeling relationship and reports the results of some experiments involving the application of these methods.
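One classic way to construct such a modeling function from scattered samples is Shepard's inverse-distance-weighted interpolation; the paper examines several modeling techniques, so the sketch below is merely representative:

    import numpy as np

    def shepard_grid(points, values, grid_shape, p=2.0, eps=1e-12):
        """Resample scattered samples (x_i, y_i, z_i; F_i) onto a regular
        grid with inverse-distance weighting (Shepard's method).

        points: (N, 3) sample locations, assumed scaled into [0, 1]^3
        values: (N,) values F_i; memory use suits modest grids only
        """
        nx, ny, nz = grid_shape
        axes = [np.linspace(0.0, 1.0, n) for n in (nx, ny, nz)]
        gx, gy, gz = np.meshgrid(*axes, indexing="ij")
        grid = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)  # (M, 3)
        # Squared distance from every grid node to every sample: (M, N)
        d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
        w = 1.0 / (d2 ** (p / 2.0) + eps)    # eps guards samples on nodes
        F = (w * values[None, :]).sum(axis=1) / w.sum(axis=1)
        return F.reshape(nx, ny, nz)

The cuberille-grid methods mentioned above can then be applied directly to the resampled field.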
Multiple-Variable Techniques
High-speed integrated rendering algorithm for interpreting multiple-variable 3-D data
Tatsuo Miyazawa
Many data visualization problems require both volumetric data representing sampled scalar or vector functions of 3D spatial dimensions and geometric data representing 3D geometric objects to be displayed together in a single image. In 3D data visualization with multiple variables, it is necessary to use various representations in order to extract the relations between several variables. This paper proposes an integrated rendering algorithm for visualizing 3D volumetric and geometric data such as surfaces, lines, and points, simultaneously with depth information, and another algorithm for speeding up the first. The approach proposed is to extend a volume rendering algorithm based on ray-tracing so that it can handle both 3D volumetric and geometric data. The algorithm processes these data in accordance with their original representation formats to eliminate conversion artifacts such as spurious or missing surfaces, and also gives special treatment to volume segments so as to avoid errors in visibility at the intersections between the volume segments and the geometric data. It uses several techniques to improve the performance of the rendering process. Adaptive termination of ray-tracing, elimination of rays that do not intersect the volume, and adaptive undersampling over a pixel plane improve the performance by three to seven times over the brute-force approach. The cost and versatility of the algorithm are evaluated by using data from the results of 3D computational fluid dynamics.
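Adaptive termination of ray-tracing, one of the speedup techniques mentioned, exploits the fact that once a ray's accumulated opacity approaches 1, deeper samples can no longer change the pixel. A schematic front-to-back compositing loop (illustrative only, not the paper's algorithm):

    def composite_ray(samples, opacities, alpha_cutoff=0.98):
        """Front-to-back compositing with adaptive ray termination.

        samples:   intensities along the ray, ordered front to back
        opacities: per-sample alpha values in [0, 1]
        """
        color, alpha = 0.0, 0.0
        for c, a in zip(samples, opacities):
            color += (1.0 - alpha) * a * c
            alpha += (1.0 - alpha) * a
            if alpha >= alpha_cutoff:   # nothing behind can show through
                break                   # adaptive termination
        return color, alpha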
Interactive graphics system for multivariate data display
Richard A. Becker, William S. Cleveland, William M. Shyu, et al.
Advances in computer graphics have permitted statisticians to devise dynamic graphics displays which allow them to show, control, and interact with moving pictures of data. These include the display of rotating point clouds and a technique known as 'brushing.' The software system described in this paper brings together these two dynamic techniques for analyzing multivariate data. Color is used to enhance the effect of brushing, and full-color stereoscopic display is used in 3-D point cloud rotation. This system, which is designed to run on Silicon Graphics IRIS 4D-series workstations, implements the two techniques in a highly integrated way. It also adds capabilities for manipulating labels associated with data points, and has a graphical user interface that enhances the use of the methods for data analysis. It is capable of being used interactively with S, a general programming environment for data analysis and graphics.
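Brushing reduces to computing a selection mask in one projection and reusing it in every linked view. A bare-bones sketch; the array names and rectangle convention are assumptions, not the system's interface:

    import numpy as np

    def brush(data, x, y, rect):
        """Mask of cases whose (x, y) projection falls inside the brush.

        data: (N, D) multivariate data; x, y: column indices of the panel
        rect: (xmin, xmax, ymin, ymax) extent of the brush rectangle
        """
        xmin, xmax, ymin, ymax = rect
        return ((data[:, x] >= xmin) & (data[:, x] <= xmax) &
                (data[:, y] >= ymin) & (data[:, y] <= ymax))

    # Linked panels reuse the same mask, so the brushed cases light up
    # everywhere, e.g.: colors = np.where(mask, "red", "gray")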
Unstructured or Qualitative Information
Visualization tool for human-machine interface designers
Michael P. Prevost, Carolyn P. Banda
As modern human-machine systems continue to grow in capabilities and complexity, system operators are faced with integrating and managing increased quantities of information. Since many information components are highly related to each other, optimizing the spatial and temporal aspects of presenting information to the operator has become a formidable task for the human-machine interface (HMI) designer. The authors describe a tool in an early stage of development, the Information Source Layout Editor (ISLE). This tool is to be used for information presentation design and analysis; it uses human factors guidelines to assist the HMI designer in the spatial layout of the information required by machine operators to perform their tasks effectively. These human factors guidelines address such areas as the functional and physical relatedness of information sources. By representing these relationships with metaphors such as spring tension, attractors, and repellers, the tool can help designers visualize the complex constraint space and interacting effects of moving displays to various alternate locations. The tool contains techniques for visualizing the relative 'goodness' of a configuration, as well as mechanisms such as optimization vectors to provide guidance toward a more optimal design. Also available is a rule-based design checker to determine compliance with selected human factors guidelines.
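The spring/attractor/repeller metaphor maps naturally onto a force-directed layout in which related information sources pull together and all sources push apart. A generic sketch of that scheme, not the ISLE implementation:

    import numpy as np

    def relax_layout(pos, related, k_spring=0.02, k_repel=0.5, steps=200):
        """Iteratively relax display positions under spring and repeller
        forces.

        pos:     (N, 2) initial display positions
        related: (N, N) symmetric relatedness weights in [0, 1]
        """
        pos = pos.copy()
        for _ in range(steps):
            delta = pos[:, None, :] - pos[None, :, :]        # (N, N, 2)
            dist = np.linalg.norm(delta, axis=-1) + 1e-9
            np.fill_diagonal(dist, np.inf)                   # skip self-pairs
            repel = k_repel * delta / dist[..., None] ** 3   # push apart
            attract = -k_spring * related[..., None] * delta # spring pull
            pos += (repel + attract).sum(axis=1)
        return pos

A configuration's relative 'goodness' could then be scored by how far the current layout sits from this relaxed equilibrium.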
Project DaVinci
Norman Winarsky, Joanna R. Alexander
This paper describes one of the first projects -- Project DaVinci -- tasked by the NIDL. DaVinci is a long-term project that will require research and development in diverse areas, including data fusion, multi-dimensional model-based rendering, multi-dimensional data visualization, and database browsing. It is designed to have broad impact across the government, industrial, medical, and academic communities. Part of the charter is to gain the support of the best resources available in these areas.
Visualization of manufacturing process data in N-dimensional spaces: a reanalysis of the data
Ann C. Fulop, Donald M. Allen, Gerhard Deffner
As process engineers are pressed to better understand and control their manufacturing processes, the ability to identify relationships between variables becomes increasingly important. Various methods are available to facilitate the identification of quantitative relationships, but all are somewhat limited in their ability to characterize important relationships involving large numbers of variables and vast amounts of data. A previous study compared the ability of subjects to identify relationships on the basis of several different modes of displaying process data. The study also explored options for data visualization techniques that would aid the identification of complex relationships by combining the data display capabilities of high-resolution graphics workstations and the pattern recognition capabilities of humans. Results showed that subjects who could accurately use a three-dimensional display had faster response times than subjects who used a two-dimensional display. This paper discusses a more in-depth analysis of the data from the previous study. Specifically, it examines the influences of data visualization style, perceptual complexity, and informational complexity on the user's response times and accuracy. The reanalysis confirms and further explains the earlier findings: three-dimensional displays improve performance when the displayed information is perceptually complex, whereas informationally complex data are best displayed in two dimensions. The applicability of this research to process characterization, statistical analyses, and other software tools is discussed.
Visual thinking in organizational analysis
Charles E. Grantham
Visualizing the relationships among elements of large, complex databases is yielding new insights in several fields. The author demonstrates the use of 'visual thinking' as an analytical tool in the analysis of formal, complex organizations. Recent developments in organizational design and office automation are making the visual analysis of workflows possible. An analytical mental model of organizational functioning can be built upon a depiction of information flows among work group members. The dynamics of organizational functioning can be described in terms of six essential processes. Furthermore, each of these sub-systems develops within a staged cycle referred to as an enneagram model. Together these mental models present a visual metaphor of healthy functioning in large formal organizations, in both static and dynamic terms. These models can be used to depict the 'state' of an organization at points in time by linking each process to quantitative data taken from monitoring the flow of information in computer networks.
Invited Presentation
Virtual environment technology
David L. Zeltzer
Since the late 1960s and early 1970s, researchers have been building novel display devices -- including head-mounted displays (HMDs) -- and a variety of manual input devices, including force input and output. With the advent of powerful graphics workstations and relatively inexpensive HMDs and glove-like input devices, however, interest in 'virtual environments' seems to be rising exponentially. In this paper the key components of a virtual environment -- autonomy, interaction, and presence -- are described. Autonomy is a qualitative measure of the capability of computational models to act and react to simulated events and stimuli. Interaction measures the degree of access to model parameters at runtime, ranging from batch processing with no interaction to comprehensive, real-time access to all model parameters. Presence is a rough measure of the number and fidelity of available sensory input and output channels. Work on representing and controlling synthetic autonomous agents for virtual environments will be briefly reviewed, and videotaped examples will be shown.
Interpretation of Spatial Data
Analysis and representation of complex structures in separated flows
James L. Helman, Lambertus Hesselink
The authors discuss recent work on extraction and visualization of topological information in separated fluid flow data sets. As with scene analysis, an abstract representation of a large data set can greatly facilitate the understanding of complex, high-level structures. When studying flow topology, such a representation can be produced by locating and characterizing critical points in the velocity field and generating the associated stream surfaces. In 3D flows, the surface topology serves as the starting point. The 2D tangential velocity field near the surface of the body is examined for critical points. The tangential velocity field is integrated out along the principal directions of certain classes of critical points to produce curves depicting the topology of the flow near the body. The points and curves are linked to form a skeleton representing the 2D vector field topology. This skeleton provides a basis for analyzing the 3D structures associated with the flow separation. The points along the separation curves in the skeleton are used to start tangent curve integrations. Integration origins are successively refined to produce stream surfaces. The map of the global topology is completed by generating those stream surfaces associated with 3D critical points.
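Locating and characterizing critical points typically comes down to classifying the eigenvalues of the velocity-gradient (Jacobian) matrix where the field vanishes. A sketch of the standard 2D taxonomy used in such skeletons (not the authors' code):

    import numpy as np

    def classify_critical_point(jacobian, tol=1e-12):
        """Classify a critical point of a 2D vector field by the
        eigenvalues of its 2x2 velocity-gradient matrix."""
        ev = np.linalg.eigvals(jacobian)
        re, im = ev.real, ev.imag
        if np.all(np.abs(im) > tol):            # complex pair: rotation
            if np.all(np.abs(re) < tol):
                return "center"
            return "attracting focus" if np.all(re < 0) else "repelling focus"
        if re[0] * re[1] < 0:
            return "saddle"                     # separatrices start here
        return "attracting node" if np.all(re < 0) else "repelling node"

Integrations along the principal directions of saddles and other suitable classes then yield the curves of the topological skeleton.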
Three-dimensional visualization and quantification of evolving amorphous objects
Deborah E. Silver, Norman J. Zabusky
Studying the stability, evolution, and interaction of coherent structures over time-varying data sets is the essence of discovery in many branches of science. In this paper, the authors discuss the process of visiometrics: visualizing, extracting, quantifying, and mathematizing evolving amorphous objects. This concept is applied to data sets from fluid dynamical problems. In particular, for three-dimensional phenomena, it is shown how this approach enhances understanding of the topology and kinematics of these problems.
Radiative tetrahedral lattices
Jesse W. Driver III, William Chris Buckalew
Advanced graphics techniques for rendering photo-realistic scenes have seen little use in scientific visualization, owing to uncertainty as to whether the higher computational cost is worth the increased visual realism. This paper presents a new low-cost method that provides more visual cues for interpreting complex dynamic data. The method has been used to visualize neurobiology data, with neuron synapses modeled as radiative bursts. Images are produced at a rate of 3 per minute for a 395-neuron simulation. The method was implemented using the Illumination Networks energy-balance rendering technique introduced at SIGGRAPH '89, and work is underway to parallelize the algorithm, which should yield interactive speeds. Two distinctive features of this method are (1) the ability to represent dynamic changes in data by mapping these changes to the brightness of polyhedral objects in an abstract data space; the light emitted by these objects into the surrounding scene forms another dimension for data interpretation; and (2) the ability to better represent complex data geometry on a finite-resolution computer screen. Because objects emit light, their effects on the environment can be seen even if the objects are smaller than a pixel when mapped to the screen or are obscured by other objects in the scene.
Brain surface maps from 3-D medical images
Jiuhuai Lu, Eric W. Hansen, Michael S. Gazzaniga
The anatomic and functional localization of brain lesions for neurologic diagnosis and brain surgery is facilitated by labeling the cortical surface in 3D images. This paper presents a method which extracts cortical contours from magnetic resonance (MR) image series and then produces a planar surface map which preserves important anatomic features. The resultant map may be used for manual anatomic localization as well as for further automatic labeling. Outer contours are determined on MR cross-sectional images by following the clear boundaries between gray matter and cerebrospinal fluid, skipping over sulci. Carrying this contour below the surface by shrinking it along its normal produces an inner contour that alternately intercepts gray matter (sulci) and white matter along its length. This procedure is applied to every section in the set, and the image (grayscale) values along the inner contours are radially projected and interpolated onto a semi-cylindrical surface with axis normal to the slices and large enough to cover the whole brain. A planar map of the cortical surface results from flattening this cylindrical surface. The projection from inner contour to cylindrical surface is unique in the sense that different points on the inner contour correspond to different points on the cylindrical surface. As the outer contours are readily obtained by automatic segmentation, cortical maps can be made directly from an MR series.
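Per slice, the radial projection can be pictured as indexing the grayscale values along the inner contour by angle about the slice center and resampling them onto a uniform angular grid, one row of the flattened map per slice. A simplified sketch in which the cylinder axis and radius handling are glossed over:

    import numpy as np

    def project_to_cylinder(contour_xy, values, n_theta=360):
        """Project values sampled along one inner contour onto a uniform
        angular grid around the slice centroid.

        contour_xy: (K, 2) contour points in slice coordinates
        values:     (K,) grayscale values along the contour
        """
        d = contour_xy - contour_xy.mean(axis=0)
        theta = np.arctan2(d[:, 1], d[:, 0])      # angle of each point
        order = np.argsort(theta)
        grid = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)
        # Periodic linear interpolation in angle: one row of the map
        return np.interp(grid, theta[order], values[order],
                         period=2 * np.pi)

Stacking the rows from all slices and flattening the resulting cylinder gives the planar cortical map.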
Detection and visualization of porosity in industrial CT scans of aluminum die castings
Lee T. Andrews, Joseph W. Klingler, Jeffery A. Schindler, et al.
During the production of aluminum die castings, regions of porosity are created in the resultant part. These regions may or may not affect the overall quality of the part, depending on the location of the porosity. If the porosity is located in areas which do not require machining, such as drilling and tapping, it will not affect the performance of the part and therefore does not result in a reject. However, if the porosity provides a pathway for passage of air or liquid from one chamber to another, it results in a defective part. Porosity is measured in a variety of ways, such as fluoroscopy, pressurization in a water tank, and other nondestructive testing (NDT) techniques. This project was undertaken to evaluate whether industrial computed tomography (CT) scanning could detect porosity, and to measure the connectivity of detected porosity to determine if the part is a 'leaker.' Scans were taken of aluminum die castings and then analyzed to determine if the porosity present could be detected with this method. Three-dimensional gray-scale mathematical morphology algorithms were developed to extract regions of porosity while maintaining the geometric integrity of the data. Once the regions of porosity were found, connectivity analysis was performed to determine if the porosity provided a path between chambers of the castings. Visualization of the porosity contained within the casting was accomplished using apE (Animation Production Environment) from the Ohio Supercomputer Graphics Project. Using transparency and color, the regions of porosity can easily be seen within the 3D renderings of the part. Due to the large quantity of data, the morphological analysis and renderings were done on a Cray Y-MP supercomputer at the Ohio Supercomputer Center. This process shows promise in the design of new castings and in the quality control of existing production.
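With binary rather than the authors' gray-scale morphology, the 'leaker' test reduces to labeling connected pore regions and asking whether any region touches both chambers. A simplified sketch using scipy.ndimage; the threshold and chamber masks are hypothetical inputs:

    import numpy as np
    from scipy import ndimage

    def is_leaker(ct_volume, pore_threshold, chamber_a, chamber_b):
        """Detect pores in a CT volume and test chamber-to-chamber
        connectivity.

        ct_volume: 3D array of CT densities (low density = pore)
        chamber_a, chamber_b: boolean masks of the two chamber surfaces
        """
        pores = ct_volume < pore_threshold
        # Opening removes speckle while keeping real pore geometry
        pores = ndimage.binary_opening(pores, structure=np.ones((3, 3, 3)))
        labels, n = ndimage.label(pores)   # 6-connected regions by default
        for i in range(1, n + 1):
            region = labels == i
            if (region & chamber_a).any() and (region & chamber_b).any():
                return True    # a pore path joins the chambers: reject
        return False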
Vector and scalar field interpretation
Edward J. Farrell, Trond Aukrust, Josef M. Oberhuber
Complex computations and simulations often produce multiple scalar and vector 3D fields that require display and interpretation. This paper presents an approach to imaging scalar fields based on an optical model with several interactive options. Vector fields are imaged with flow lines and moving comets, and related to the scalar field using color-coded 2D images. The 3D structure bounding the computed fields is imaged as a solid and serves as a spatial reference for the data. The utility of the methods is illustrated with data from a large-scale simulation of ocean circulation in the Norwegian-Greenland seas.
Animation Methods
Network visualization: user interface issues
Richard A. Becker, Stephen G. Eick, Eileen O. Miller, et al.
Seenet is a system for network visualization in which statistics that describe the operation of a network can be displayed graphically. In this system, a network is presented on a computer screen along with various user-operated sliders, buttons, and toggles that allow direct manipulation of the display, in order to reveal information about the state of the network. The user interface was designed to promote rapid interaction and ease of use, which are critical to the success of this system. Features of Seenet include: a screen design in which most of the area is utilized for the network display, color usage that is consistent and meaningful, mouse actions on the network display to bring up auxiliary information, a novel 2-sided slider, animation for showing time sequences, and 'brushing' for selecting subsets of the network nodes.
Computer animation method for simulating polymer flow for injection-molded parts
Meg W. Perry, Richard C. Rumbaugh, David P. Frost
This method pertains to polymer flow data visualization in the field of computer-aided engineering (CAE) known as injection molding. Flow-front data output from injection molding analysis generates a flat-shaded animated display representative of polymer flow into a mold cavity. This display is superior to static graphics: it enables the engineer to examine and identify undesired filling patterns, and it allows CAE engineers to make quicker and more accurate predictions.
System for making scientific videotapes
Perry A. Appino, Edward J. Farrell
This paper presents vuemovie, a program for the interactive production of scientific videotapes. Vuemovie provides an environment for composing, previewing, editing, and recording a script representation of a videotape. A script contains animation sequences, titles, and transitions that are easily manipulated with a text editor. Animation sequences can be precomputed or generated on-the-fly and can be recorded in real-time or frame-by-frame. Components of the system include a videotape recorder server, a script command language, and a user interface. Implementations of vuemovie on two visualization workstations are described.
Interactive analysis of transient field data
Robert R. Dickinson
Many recently developed visualization systems provide far less direct user control over the real-time 'coordinate' of workstation displays than they do over the spatial coordinates. For a given static (time-invariant) object, the user is in direct control of the level of detail of spatial information: 'zooming in' means that more detail is displayed within the specified spatial clipping limits, and no information is displayed beyond these limits. There has been no analogous approach to the use of the real-time coordinate of the display. In this paper, the authors introduce a method that explicitly provides direct interactive user control over the use of the real-time coordinate. In addition to the usual VCR control buttons, the user has interactive control over the playback speed and the start and finish times along the t coordinate of the underlying field domain. Sliding the start- and finish-time scales is analogous to altering the spatial clipping limits of the viewing projection. If the start and finish times are moved closer together and the play speed is reduced, then a higher-resolution sequence is computed and displayed, and display information beyond these limits is discarded. This involves real-time monitoring of compute and display cycles to faithfully exploit the real time of the display as a visualization dimension. This capability lets the user easily focus on features at particular parts of the space and time of the underlying field.
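The temporal analogue of spatial clipping and zooming can be sketched as a mapping from the user's time window and playback speed to the sequence of field times that must be computed and displayed. Names below are illustrative:

    import numpy as np

    def playback_times(t_start, t_finish, play_speed, display_rate=30.0):
        """Field times to compute for one playback pass.

        play_speed:   field seconds advanced per wall-clock second
        display_rate: frames shown per wall-clock second
        """
        dt = play_speed / display_rate       # field-time step per frame
        n_frames = max(int((t_finish - t_start) / dt), 1)
        # Reducing the play speed shrinks dt, so the window is sampled at
        # higher temporal resolution; times outside [t_start, t_finish]
        # are simply discarded, like data beyond spatial clipping limits.
        return t_start + dt * np.arange(n_frames)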
Invited Presentation
Process of videotape making: presentation design, software, and hardware
Robert R. Dickinson, Dan R. Brady, Tim Bennison, et al.
The use of technical videotape presentations for communicating abstractions of complex data is now becoming commonplace. While the use of videotapes in the day-to-day work of scientists and engineers is still in its infancy, their use at applications-oriented conferences is growing rapidly. Despite these advancements, very little has been written about the process of making technical videotapes. For printed media, different presentation styles are well known for categories such as results reports, executive summary reports, and technical papers and articles. In this paper, the authors present ideas on technical videotape presentation design in a format that is worth referring to. They have started to document the ways in which the experience of media specialists, teaching professionals, and character animators can be applied to scientific animation. Software and hardware considerations are also discussed. For this portion, distinctions are drawn between the software and hardware required for computer animation (frame-at-a-time) productions and for live recorded interaction with a computer graphics display.
Images and Sound for Data Interpretation
Global geometric, sound, and color controls for iconographic displays of scientific data
Stuart Smith, Georges G. Grinstein, Ronald M. Pickett
The authors introduce the Exvis exploratory data visualization system. This system uses a display technique based on visual texture perception to reveal structure in multidimensional data and includes a sound output facility for simultaneous sonification of data. The elementary unit of the display is a glyph, or 'icon,' whose attributes are data-driven. Global display controls for icon geometry, sound, and color have been added to the original system. A global control is a transformation that applies to the entire icon completely independently of the mapping of specific data parameters to specific icon attributes. These controls allow the user to maximize both the visual contrast and the auditory contrast available for a given choice of icon and a given mapping of data parameters to icon attributes, and they allow the user to selectively enhance different features in an iconographic display. Using these controls to manipulate displays of computer-generated multidimensional data, the authors have been able to obtain pictures that exhibit well-differentiated texture regions even though the data that produce these regions have no differences in their first-order statistics. The global display controls are most interactive when Exvis is implemented on a computing platform such as the Connection Machine, which can redraw an iconographic picture as rapidly as the user can manipulate the controls.
Using sound to extract meaning from complex data
Carla Scaletti, Alan B. Craig
In analyzing abstract data sets, it is useful to represent them in several alternative formats, each format bringing out different aspects of the data. While most work in data mapping has focused on visual representations, it has been found that sonic representations can also be effective aids in interpreting complex data, especially when sonification is used in conjunction with visualization. The authors have developed prototypes for several high-level sonification tools that can be applied to a wide variety of data. While they used programmable multi-processor digital signal processing hardware to develop and experiment with these prototypes, each of these tools could be implemented as special-purpose hardware or software for use by scientists in specific applications. The prototype tools include: Mapper (maps data to various sonic parameters), Comparator (feeds a different mapping into each speaker channel), Sonic Histogram (maps the magnitude of each category onto the amplitude of its associated sound), Shifter (shifts signals into the audible range), and Marker (a sonic alarm that marks a specific condition). These tools were tested by using them to generate data-driven sound tracks for video animations generated from the same data. The resulting videos provide an increased data bandwidth and an increased sense of virtual reality to the viewer.
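A toy version of the Mapper tool's idea scales a data series into a pitch range and renders one short tone per sample; the parameter choices below are assumptions, not the authors' tool:

    import numpy as np

    def sonify(series, dur=0.12, sr=8000, f_lo=220.0, f_hi=880.0):
        """Map a data series to pitch; return an audio signal at rate sr."""
        x = np.asarray(series, dtype=float)
        norm = (x - x.min()) / (np.ptp(x) + 1e-12)    # data -> [0, 1]
        freqs = f_lo * (f_hi / f_lo) ** norm          # exponential pitch map
        t = np.arange(int(dur * sr)) / sr
        return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

    # In the same spirit, a Shifter would scale signal frequencies into the
    # audible range, and a Marker would mix in an alarm tone wherever a
    # specified condition holds.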
Software Systems
Visualization and comparison of simulation results in computational fluid dynamics
Wolfgang Felger, Peter Astheimer
Current visualization systems give little or no support for user interaction when investigating data and comparing simulation results. If a simulation produces a sequence of images, it is difficult to compare the frames mentally. Such a comparison, however, is of great importance to the simulation analyst for uncovering correlations in the data, especially in non-consecutive data records. Thus, it is essential to present two or more data records (represented by static images) in a comparable manner. Comparison can be achieved with static techniques or emphasized with the help of animation tools. Several techniques exist for presenting and merging the data records to be compared. A comparison can operate on complete data records as well as on selected subareas. For detailed analysis of a simulation, graphical representations such as color coding are often too imprecise; identifying an image area should return the exact values of the related data (i.e., take a data probe). In this paper, the authors investigate comparison and identification techniques in the visualization process. A prototype realizing such techniques is presented. The evaluation and verification of several techniques is accomplished with a case study, a 2D simulation in computational fluid dynamics investigating the time-dependent flow in a rotating channel (e.g., the fuel supply for a jet-powered helicopter rotor).
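The data-probe idea is simply that picking a location in the image returns the exact underlying values rather than their color codes. A minimal sketch:

    def probe(record_a, record_b, ij):
        """Return exact values of two data records, and their difference,
        at the picked grid index `ij` (row, col). Names are illustrative."""
        a, b = record_a[ij], record_b[ij]
        return {"record_a": a, "record_b": b, "difference": a - b}

    # Usage on two non-consecutive records of a 2D simulation:
    # print(probe(results[3], results[9], (40, 17)))   # hypothetical data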
Object-oriented data management for interactive visual analysis of three-dimensional fluid-flow models
Sandra S. Walther, Richard L. Peskin
A strategy that allows researchers to interactively visualize, analyze, and query scientific data from computational modeling is presented. The strategy, implemented in an object-oriented visualization interface, addresses a key issue in interactive data analysis, namely, how the user can maintain a direct connection between the visual representation on the screen and the actual data set. This is accomplished by constructing hashing maps to a data set of objects based on the unique spatio-temporal coordinates of each object. A plane is defined as a two-dimensional array of objects; a volume, as an array of planes; a time series, as an array of volumes. The indices of the arrays are mapped to the mesh values of the computation along the relevant axis. These data structures provide the programmatic elements to manage data as volumes over time. The data itself is collected into an indexed list as it occurs; the volumetric temporal organization is constructed as a set of maps that point to the locations of the original data. All queries to the data are framed as queries about these maps and are in effect queries about the computational mesh. Thus, from one data organization format, nonlocal values in the mesh can be retrieved; polygons, triangles, and contours can be constructed and visualized; and physical formations (pathlines, streamlines, streaklines) can be computed.
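The core of the strategy is a map keyed by unique spatio-temporal mesh coordinates, so that a pick on the screen leads straight back to the underlying data. A bare-bones sketch with illustrative names:

    class FieldStore:
        """Index data objects by (i, j, k, n) mesh coordinates."""
        def __init__(self):
            self.sequence = []     # data in arrival order, as collected
            self.by_coord = {}     # mesh indices -> data object

        def add(self, i, j, k, n, value):
            self.sequence.append(value)
            self.by_coord[(i, j, k, n)] = value

        def plane(self, k, n, ni, nj):
            """A plane is a 2D array of objects at fixed k and time n."""
            return [[self.by_coord.get((i, j, k, n)) for j in range(nj)]
                    for i in range(ni)]

    # A volume stacks planes over k; a time series stacks volumes over n.
    # Every query about the picture becomes a query about mesh indices.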
Visual workbench for analyzing the behavior of dynamical systems
Peter Cahoon
A collection of numerical, visual, and symbolic methods was combined to provide interactive, intelligent analysis of dynamical systems and the geometry of their behaviors. The analysis system deals specifically with the topics of continuation and bifurcation encountered in the solution of non-linear ordinary differential equations and certain types of boundary value problems. The system serves as a workbench composed of several modular features. A graphical interface to the continuation software is used to investigate the range of general branching behaviors. Trajectories of particular interest are then read into a vector field visualizer for further analysis. On-line help and general system inquiries are implemented using a non-monotonic reasoner. The volumetric visualizations of these fields can be interactively viewed and then rendered using ray-tracing software. The individual frames are recorded on videotape and played back in an animated sequence. The workbench provides a complete environment for analyzing most aspects of the geometry of dynamical system behavior.
Applications
Efficient extraction of local myocardial motion with optical flow and a resolution hierarchy
Geetha Srikantan, David B. Sher, Edward J. Newberger
The authors develop methods for the recovery of myocardial motion from two-dimensional echocardiograms (2DE). Echocardiograms are sonograms of a beating heart; echocardiography is an important, widely used, non-invasive tool in the diagnosis of heart disease. The authors attempt to measure the velocity of local motion to help separate myocardial regions from the blood volume and to classify the behavior of those regions. Multiresolution methods have not previously been applied to estimating local motion in non-rigid, highly textured imagery such as 2DE. Earlier work uses the method of optical flow after applying a large median filter to the 2DE. Optical flow estimates the apparent two-dimensional velocity in a sequence of images from brightness changes between successive images. A study of the application of optical flow at several resolutions is presented. Results indicate which resolutions are most appropriate for optical flow.
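A standard single-resolution optical-flow estimator in the spirit of the study is the Lucas-Kanade least-squares solve over a local window; a resolution hierarchy would repeat it on successively smoothed and subsampled frames. A sketch, not the authors' code:

    import numpy as np

    def lucas_kanade(im1, im2, i, j, w=7):
        """Apparent velocity at pixel (i, j) from two successive frames."""
        Iy, Ix = np.gradient(im1.astype(float))      # spatial derivatives
        It = im2.astype(float) - im1.astype(float)   # temporal derivative
        sl = (slice(i - w, i + w + 1), slice(j - w, j + w + 1))
        A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
        b = -It[sl].ravel()
        (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)  # solve A [u v] = b
        return u, v                          # velocity in pixels per frame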
Collaborative processing to extract myocardium from a sequence of two-dimensional echocardiograms
Shriram V. Revankar, David B. Sher, Steven Rosenthal
Echocardiography is an important clinical method for the identification and assessment of the entire spectrum of cardiac diseases. Visual assessment of echocardiograms is tedious and subjective, while automatic techniques, owing to the poor quality of the data, are unreliable. These drawbacks can be minimized through collaborative processing. The authors describe a collaborative method to extract the myocardium from a sequence of two-dimensional echocardiograms. Initially, a morphologically adaptive thresholding scheme generates a rough estimate of the myocardium, and then a collaborative scheme refines the estimate. The threshold is computed at each pixel as a function of the local morphology and a default threshold. The points that have echodensities greater than the threshold form a rough estimate of the myocardium. This estimate is collaboratively refined in accordance with corrections the operator specifies through mouse gestures. The gestures are mapped onto an image processing scheme that decides the precise boundaries of the regions to be added to or deleted from the estimated myocardium.
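The thresholding stage can be sketched as blending a default threshold with a local image statistic at every pixel; the paper's dependence on local morphology is richer than this illustrative blend:

    import numpy as np

    def adaptive_threshold(image, default_t, w=15, mix=0.5):
        """Per-pixel threshold from a local mean and a default value;
        pixels above it form the rough myocardium estimate."""
        img = image.astype(float)
        pad = np.pad(img, w, mode="edge")
        local = np.empty_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                local[i, j] = pad[i:i + 2 * w + 1, j:j + 2 * w + 1].mean()
        t = mix * local + (1.0 - mix) * default_t
        return img > t

Operator gestures would then add regions to, or remove regions from, the mask this returns.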
Visualizing underwater acoustic matched-field processing
Lawrence Rosenblum, Behzad Kamgar-Parsi, Margarida Karahalios, et al.
Matched-field processing is a new technique for processing ocean acoustic data measured by an array of hydrophones. It produces estimates of the location of sources of acoustic energy. This method differs from source localization techniques in other disciplines in that it uses the complex underwater acoustic environment to improve the accuracy of the source localization. An unexplored problem in matched-field processing has been to separate multiple sources within a matched-field ambiguity function. Underwater acoustic processing is one of many disciplines where a synthesis of computer graphics and image processing is producing new insight. The benefits of different volume visualization algorithms for matched-field display are discussed. The authors show how this led to a template matching scheme for identifying a source within the matched-field ambiguity function that can help move toward an automated source localization process.
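A generic template-matching scheme over the ambiguity surface scores every offset by normalized cross-correlation and reports the peak; this is in the spirit of the paper, not its exact algorithm:

    import numpy as np

    def match_template(ambiguity, template):
        """Best (row, col) offset of `template` in a 2D ambiguity surface."""
        th, tw = template.shape
        t = (template - template.mean()) / (template.std() + 1e-12)
        H, W = ambiguity.shape
        score = np.empty((H - th + 1, W - tw + 1))
        for i in range(score.shape[0]):
            for j in range(score.shape[1]):
                patch = ambiguity[i:i + th, j:j + tw]
                p = (patch - patch.mean()) / (patch.std() + 1e-12)
                score[i, j] = (p * t).mean()     # normalized correlation
        return np.unravel_index(score.argmax(), score.shape)

Subtracting or masking a matched source and re-running the search is one way such a scheme could separate multiple sources.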
BRICORK: an automatic machine with image processing for the production of corks
Roger Davies, Bento A. Brazio Correia, Fernando D. Carvalho, et al.
The production of cork stoppers from raw cork strip is a manual and labour-intensive process in which a punch-operator quickly inspects all sides of the cork strip for defects and decides where to punch out stoppers. He then positions the strip underneath a rotating tubular cutter and punches out the stoppers one at a time. This procedure is somewhat subjective and prone to error, being dependent on the judgement and accuracy of the operator. This paper describes the machine being developed jointly by Mecanova, Laboratório Nacional de Engenharia e Tecnologia (LNETI), and Empresa de Investigação e Desenvolvimento de Electrónica SA (EID), which automatically processes cork strip introduced by an unskilled operator. The machine uses both image processing and laser inspection techniques to examine the strip. Defects in the cork are detected and categorised in order to determine regions where stoppers may be punched. The precise locations are then automatically optimised for best usage of the raw material (quantity and quality of stoppers). In order to achieve the required speed of production, these image processing techniques may be implemented in hardware. The paper presents results obtained using the vision system software under development, together with descriptions of both the image processing and mechanical aspects of the proposed machine.