Proceedings Volume 0375

Medical Imaging and Image Interpretation

Judith M. S. Prewitt
View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 1 November 1982
Contents: 1 Session, 97 Papers, 0 Presentations
Conference: 1st International Symposium on Medical Imaging and Image Interpretation 1982
Volume Number: 0375

Table of Contents

All Papers
Contrast Transmission In Medical Image Display
Stephen M. Pizer, John B. Zimmerman, R. Eugene Johnston
The display of medical images involves transforming recorded intensities such as CT numbers into perceivable intensities such as combinations of color and luminance. For the viewer to extract the most information about patterns of decreasing and increasing recorded intensity, the display designer must pay attention to three issues: 1) choice of display scale, including its discretization; 2) correction for variations in contrast sensitivity across the display scale due to the observer and the display device (producing an honest display); and 3) contrast enhancement based on the information in the recorded image and its importance, determined by viewing objectives. This paper will present concepts and approaches in all three of these areas. In choosing display scales three properties are important: sensitivity, associability, and naturalness of order. The unit of just noticeable difference (jnd) will be carefully defined. An observer experiment to measure the jnd values across a display scale will be specified. The overall sensitivity provided by a scale as measured in jnd's gives a measure of sensitivity called the perceived dynamic range (PDR). Methods for determining the PDR from the aforementioned jnd values, and PDR's for various grey and pseudocolor scales, will be presented. Methods of achieving sensitivity while retaining associability and naturalness of order with pseudocolor scales will be suggested. For any display device and scale it is useful to compensate for the device and observer by preceding the device with an intensity mapping (lookup table) chosen so that perceived intensity is linear with display-driving intensity. This mapping can be determined from the aforementioned jnd values. With a linearized display it is possible to standardize display devices so that the same image displayed on different devices or scales (e.g. video and hard copy) will be in some sense perceptually equivalent.
Furthermore, with a linearized display, it is possible to design contrast enhancement mappings that optimize the transmission of information from the recorded image to the display-driving signal with the assurance that this information will not then be lost by a further nonlinear relation between display-driving and perceived intensity. It is suggested that optimal contrast enhancement mappings are adaptive to the local distribution of recorded intensities.
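As a rough sketch (not the authors' implementation), a display-linearizing lookup table can be derived from measured jnd values as follows; the function name, the jnd-per-level input, and the 256-level output range are assumptions for illustration:

```python
import numpy as np

def linearizing_lut(jnd_steps, n_levels=256):
    """Build a display-linearizing lookup table (illustrative sketch).

    jnd_steps[i] is the measured number of jnd's between driving levels
    i and i+1. The cumulative sum gives perceived intensity as a function
    of driving level; inverting it yields a LUT under which equal input
    steps produce equal perceptual steps on the display.
    """
    jnd_steps = np.asarray(jnd_steps, dtype=float)
    perceived = np.concatenate(([0.0], np.cumsum(jnd_steps)))  # perceived intensity per level
    target = np.linspace(0.0, perceived[-1], n_levels)          # equal perceptual spacing
    levels = np.arange(perceived.size)
    # For each target perceptual value, find the driving level that produces it.
    lut = np.interp(target, perceived, levels)
    return np.round(lut).astype(int)
```

With uniform jnd steps the mapping reduces to the identity, as expected; a display whose jnd's shrink at high levels gets correspondingly expanded there.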
Chest Radiograph Enhancement Using The Weighted Unsharp Mask
P H Jackson, G Kaye
This paper defines the weighted unsharp mask image enhancement technique and illustrates its use applied to the image processing of chest radiographs. This technique is a generalisation of the simple unsharp mask, the variable threshold zonal filter and the gradient inverse weighted mean. It has the property of sharpening or de-blurring an image while controlling two characteristic degradations of the normal unsharp mask, namely the emphasis of low level noise and the over-enhancement of well defined edges.
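The abstract does not give the weighting itself, but the general shape of a weighted unsharp mask can be sketched as follows; `weight_fn` and the example mid-band weight are hypothetical stand-ins for the paper's weighting, not its actual formulation:

```python
import numpy as np

def weighted_unsharp_mask(img, weight_fn, size=3, gain=1.0):
    """Sketch of a weighted unsharp mask.

    The plain unsharp mask adds gain * (img - local_mean). Here the added
    detail is modulated by a weight derived from the local detail amplitude,
    so low-level noise and already well-defined edges are boosted less.
    """
    img = np.asarray(img, dtype=float)
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    # local mean via a box filter (explicit loops keep the sketch dependency-free)
    mean = np.zeros_like(img)
    for dy in range(size):
        for dx in range(size):
            mean += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    mean /= size * size
    detail = img - mean
    return img + gain * weight_fn(np.abs(detail)) * detail

# Hypothetical weight: suppress small (noise) and very large (strong-edge) details.
def midband_weight(a, lo=1.0, hi=20.0):
    return np.clip((a - lo) / (hi - lo), 0.0, 1.0) * np.clip(hi / np.maximum(a, 1e-9), 0.0, 1.0)
```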
A Restoration Technique For Bio-Medical Image Blurred Due To Defocussing Effect
B. Chanda, B. B. Chaudhuri, D. Dutta Majumder
An image restoration technique suitable for defocussed images with low noise is proposed. The number of computations is reduced by minimizing the correlation between the original image and the noise process during estimation, and by applying Parseval's theorem to the constraint identity. The resulting algorithm is therefore quite economical.
Design And Evaluation Of Median Filters For Scintigraphic Image Filtering
F. Deconinck, R. Luypaert
The problem of reducing image noise and aberrant structures while conserving image sharpness is approached in the context of scintigraphy. Suitably constructed median filters are shown to be superior to conventional smoothing techniques in these applications.
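As a point of reference, a plain square-window median filter (the simplest member of the family the paper builds on) might look like this; the paper's "suitably constructed" filters are not reproduced here:

```python
import numpy as np

def median_filter(img, size=3):
    """Square-window median filter: each pixel is replaced by the median
    of its size x size neighborhood (edge-replicated at the borders).
    Unlike linear smoothing, isolated aberrant pixels are removed while
    step edges are preserved."""
    img = np.asarray(img, dtype=float)
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    windows = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(size) for dx in range(size)]
    return np.median(np.stack(windows), axis=0)
```

A single hot pixel in a scintigram is wiped out completely, whereas a mean filter would smear it into its neighbors.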
Image Restoration And Interactive Processing For Analysis Of High Resolution Banded Chromosomes.
Jim Piper
Digitised prometaphase cells are digitally filtered to restore the image quality. The filter used is based on a non-linear noise filter which corrects only highly deviant picture elements, followed by a 4 by 4 convolution approximation to a Laplacian function, which provides enhancement at the maximum spatial frequency at which the microscope transmits an appreciable signal, together with substantial suppression of noise of higher frequency. The system continues by finding the chromosome axis using a skeletonisation procedure, straightens this axis and aligns the bent chromosome along it, thus straightening the chromosome. Sets of straight chromosomes of like class can then be presented side by side with identified landmark bands aligned.
Computer Simulation Of 3D Imaging In Holography: A Preliminary Study For Automated Holographic Microscopy
B. Bianco, F. Beltrame, A. Chiabrera
The problem of simulating the observation of a hologram from an assigned point of view is considered. It is shown that there exists a simple transformation between the angular spectra of the hologram and of the image seen by the observer.
Advances In Microphotometry
Hans-Georg Zimmer
The sensitivity of microphotometers has been improved by signal processing, by compensating fluctuations in the light source, and by modifying the beam path. A signal/noise ratio of 4,000 was achieved, which allows the detection of invisible variations of light intensity. The spatial resolution of mechanical scanning stages is close to the resolution of the microscope. When equipped with optimal diaphragms, the scanning microphotometer can be regarded as a linear imaging system and characterized by a transfer function. It provides digital images of microscopic specimens without loss of spatial resolution but with improved contrast.
Focus Adjustments In Linear Systems
Horst Linge, Hans-Georg Zimmer, Volker Neuhoff
The well-known effects of defocusing in analog linear imaging systems allow one to define an infinite class of focus measures in the frequency domain. All measures are calculated as weighted squared moduli of the spectrum. In digital linear systems a subset of them yields useful focus measures. They are applicable to every object, define the physical focus, and can be translated back to the space domain, where the computation is less expensive. The sum of the squares of a modified second derivative of the image pixel values seems to be the best focus measure for practical purposes.
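A minimal space-domain version of such a focus measure, using a plain (unmodified) second difference since the paper's exact modification is not stated in the abstract:

```python
import numpy as np

def focus_measure(img):
    """Sum of squared second differences along rows and columns.

    A defocused image loses high-frequency content, so its second
    derivatives shrink; a sharper image of the same scene scores higher.
    """
    img = np.asarray(img, dtype=float)
    d2x = img[:, 2:] - 2 * img[:, 1:-1] + img[:, :-2]   # horizontal second difference
    d2y = img[2:, :] - 2 * img[1:-1, :] + img[:-2, :]   # vertical second difference
    return float((d2x ** 2).sum() + (d2y ** 2).sum())
```

Constant and linearly ramping images score exactly zero, so the measure responds only to curvature in the intensity surface, i.e. to image structure.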
Computer Vision And Sampling Considerations In Tv Microscopy
H. Harms, H. M. Aus
One critical and often neglected factor in TV microscopy is the sampling rate necessary to measure the many subcellular particles seen in the light microscope. The size of many of these cell components is in the same range as the limit of the resolving power of the light microscope. Chromatin distribution patterns in the nuclei, cytoplasm granulation, and chromosomes are only a few examples. The correct sampling of these small particles requires a sampling rate greater than the Nyquist criterion postulates for an ideal system, because the sampling function and the MTF of the microscope are not precisely band-limited. However, excessively high sampling rates lead to truncation errors as well as huge amounts of data which have to be processed. This paper shows how the sampling rate can be estimated from the aliasing error and truncation error. The sample rate required to analyse chromosomes and leukemic cells, for example, is in the range of 20 to 30 pixels/micron.
Structural Analysis Of The Coronary Arterial Tree
J. J. Gerbrands, J. H. C. Reiber, B. Scholts, et al.
Computer-aided analysis of coronary cine-angiograms provides ways to extract information about the structure of the coronary arterial tree from these films. The delineation of the coronary tree in single frames will be described, as well as the assessment of local contraction patterns from two orthogonal cine-angiograms. The procedure to detect the coronary tree consists of a gated background subtraction method, followed by a region growing process. The Hilditch skeleton forms the basis for the construction of an attributed binary tree, where the nodes correspond to bifurcations in the arterial system. Minimum-cost tree-matching procedures on two series of attributed binary trees from two orthogonal cine-angiograms are used for the assessment of local epicardial contraction patterns.
Computer-Aided Quantitation Of The Severity Of Coronary Obstructions From Single View Cineangiograms
C. J. Kooijman, J. H. C. Reiber, J. J. Gerbrands, et al.
The computer-aided quantitative analysis of coronary obstructions from digitized coronary cineangiograms is described. First, the assessment of the percentage diameter reduction from single view angiograms is discussed. This method requires the delineation of the contours of the artery and the analysis of the diameter function. Next, a densitometric method is described to transform the brightness values in the digital image into calibrated X-ray absorption profiles, thus making it possible to assess the percentage area reduction of obstructions from single views.
Quantitative Angiography: Experimental Studies On The Representation Of Model Coronary Arteries In Angiographic Films
M. Siebes, M. Gottwik, M. Schlepper
The hemodynamics of an arterial stenosis are of great diagnostic importance and depend mainly on the absolute dimensions of the vascular lesion. It is therefore important to obtain the true vessel borders from angiographic film. In this study the effect of various conditions on the representation of model coronary arteries on angiographic film is presented. A series of model coronary arteries consisting of silicone rubber tubes with inner diameters from 1.0 to 3.8 mm and wall thicknesses from 0.3 to 1.0 mm were angiographed at 25 f/sec. Urografin 766 in concentrations from 30 to 100% was used for perfusion. The cine films were digitized by manual tracing of the vessel borders and were quantitatively analyzed with a PDP 11/34 minicomputer. It could be shown that the studied conditions had an effect on edge detection, and therefore on the estimation of stenosis geometry. It is concluded that automated image processing systems should be tested for their accuracy not only with brass models, but with more realistic models under conditions simulating coronary angiography.
A Critical Examination Of Angiographic Stenosis Quantitation By Digital Image Processing
K. Barth, U. Faust, A. Both, et al.
The assessment or measurement of arterial narrowings should allow the quantitative prediction of reduced perfusion distal to the lesion. Certainly the degree of relative diameter narrowing is a basic parameter for this purpose, which can be measured directly in the angiogram. Visual estimation is quite error-prone, even if only the maximal diameter reduction has to be found. The determination of other quantities, such as the stenotic area reduction, requires at least some calculation on the computer, or can only be done if several reference measures and thresholds are taken into account. This is the case if the sclerotic volume reduction /2/ is to be calculated slice by slice over a certain length, perhaps from two projections and discriminating contrast medium from calcium shadows /12/. If hemodynamic conclusions are to be drawn from the evaluations, the magnification factor is also needed /10/ and a dynamic flow measurement should be done for final cross-checking /3/.
Functional Imaging Of Still Photo Sequences In Fluorescein Angiography Using The Image Scanner Osiris
L. Olsson, K. Carlsson
Dynamic fluorescein angiography is a method to study the peripheral blood flow of a patient. A sequence of photographs is taken after intravenous injection of fluorescein. These photographs have been digitized using the interactive image scanner OSIRIS. A calibration method has been developed to obtain comparable results from photo-sequences taken on different occasions. The temporal change in the examined area is studied, and the results are presented as one or more functional images. These are computer generated images, where the colours (or grey levels) represent a time scale. Information from the entire picture sequence is included in a functional image. Functional images have been generated for the appearance and saturation times. It has been possible to carry out this analysis on an ordinary 16-bit minicomputer.
Image Enhancement Of Chest Radiographs Using Local Statistical Information
M Cocklin, G Kaye, I Kerr, et al.
The use of statistically based image enhancement techniques applied to chest radiographs has been investigated. These techniques include Unsharp Masking, Statistical Differencing and Median Filtering. Their properties are studied and similarities highlighted. In addition, the artefacts produced in the processing are considered, along with techniques for reducing their effects. The application to radiographic images of the chest is described, with consideration of computational requirements.
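One of the techniques named above, statistical differencing, can be sketched in its classic Wallis-style form; the window size, target standard deviation, and gain cap below are illustrative values, not the paper's:

```python
import numpy as np

def statistical_differencing(img, size=15, target_sd=40.0, gain_cap=4.0):
    """Statistical differencing: rescale each pixel's deviation from its
    local mean so the local standard deviation approaches a target value.
    Flat, low-contrast regions get amplified; busy regions are left alone.
    The gain cap keeps near-uniform regions from exploding with noise."""
    img = np.asarray(img, dtype=float)
    pad = size // 2
    padded = np.pad(img, pad, mode='reflect')
    n = size * size
    s = np.zeros_like(img)
    s2 = np.zeros_like(img)
    for dy in range(size):
        for dx in range(size):
            w = padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            s += w
            s2 += w * w
    mean = s / n
    sd = np.sqrt(np.maximum(s2 / n - mean ** 2, 0.0))   # local standard deviation
    gain = np.minimum(target_sd / np.maximum(sd, 1e-6), gain_cap)
    return mean + gain * (img - mean)
```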
Structured Noise Removal By Using A High Flexible 2D Fir Digital Filter
R. Hecker, S. J. Poppl
Depending on the adjustment of data acquisition devices, medical images may be degraded by structured noise, which has to be removed before further image processing steps such as contour finding or feature extraction are applied. In the 2D frequency domain of an image, structured noise is usually characterized by a pattern with relatively sharply bounded regions. Thus, to avoid losing too much image information in filtering, it is necessary to apply a digital filter whose bandstop region has nearly the same pattern as the structured noise region. In this paper, a polynomial approximation algorithm within the scope of the window method of designing 2D Finite-duration Impulse Response (FIR) digital filters is presented, with which it is possible to design lowpass, highpass, bandpass, and bandstop filters of arbitrary shape, size and position in the 2D frequency domain. The effect of enhancement by this method on a 2D echocardiogram degraded by ripple structured noise is demonstrated.
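The window method itself (without the paper's polynomial approximation of arbitrary stop-band shapes) can be sketched as: inverse-transform a desired 2D frequency response sampled on a grid, centre the impulse response, and taper it with a separable window. Here `desired_response` is supplied directly rather than constructed from a polynomial, which is an assumption of this sketch:

```python
import numpy as np

def fir2_window(desired_response, window_1d):
    """2-D FIR design by the window method.

    desired_response: desired magnitude response on an N x N grid with
    DC at the centre (e.g. zeros over the structured-noise region for a
    bandstop filter, ones elsewhere).
    window_1d: length-N 1-D window (e.g. np.hanning(N)); applied
    separably to truncate the ideal impulse response smoothly.
    """
    h = np.fft.ifft2(np.fft.ifftshift(desired_response)).real  # ideal impulse response
    h = np.fft.fftshift(h)                                     # centre the response
    w = np.outer(window_1d, window_1d)                         # separable 2-D window
    return h * w
```

An all-pass response reduces to a unit impulse at the centre of the grid, which is a quick sanity check on the transform conventions.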
Systematic Analysis Of The Grating Lobe Of Ultrasonographic Array Directivity
P.J.'t Hoen
In ultrasonography, arrays of small elements are used to build up a real-time sonographic image of the human anatomy. The element spacing is the main factor governing the position and intensity of the grating lobes. A grating lobe is a spurious lobe of the directivity function of the array. Image resolution and image interpretation are negatively influenced by such grating lobes. For continuous-wave excitation, extensive literature exists on the grating lobe phenomenon. We, however, shall investigate the practical situation, i.e., a short acoustical pulse is emitted. The influence of element spacing, slit width, focal length, apodization, etc. will be analyzed. A special FORTRAN program has been written to calculate the directivity function. The computed results are backed up by some experiments.
The Imaging Performance Of A Multiwire Proportional Chamber Positron Camera(+)
Victor Perez-Mendez, Alberto Del Guerra, Walter R. Nelson, et al.
A new fully three-dimensional Positron Camera design is presented, made of six MultiWire Proportional Chamber modules arranged to form the lateral surface of a hexagonal prism. A true coincidence rate of 56000 c/s is expected, with an equal accidental rate, for a 400 µCi activity uniformly distributed in a 3 ℓ water phantom. A detailed Monte Carlo program has been used to investigate the dependence of the spatial resolution on the geometrical and physical parameters. A spatial resolution of 4.8 mm FWHM has been obtained for an 18F point-like source in a 10 cm radius water phantom. The main properties of the limited angle reconstruction algorithms are described in relation to the proposed detector geometry.
Optimization Of The Spatial Resolution Of Moving Object Imaging With Medical X-Ray Systems
P.J.'t Hoen
The spatial resolution is described in terms of the modulation transfer function (MTF). We will analyze the influence of the focal spot size, the movement of the object, and the resolution of the receptor. The quality of images can only be properly described if the visual system is taken into account. Consequently, we base the MTF quality criterion on the visual detection of the corresponding line-spread and edge-spread images. It appears that there is a positive correlation between this psychophysical quality and the spatial frequency for which the modulation transfer equals 0.25. This finding enables a set of characteristic parameters and nomograms to be developed, which combine the lucidity of the "unsharpness" concept with the exactness of the description by the modulation transfer function.
Digital Enhancement Of Pneumothoraces
M Cocklin, G Kaye, I Kerr, et al.
If a patient presents with symptoms indicative of a pneumothorax it is improbable that it would not be detected in a chest radiograph. However, detection on the radiograph can be difficult and a small pneumothorax may be missed when there is no clinical suspicion of its presence. This report presents some methods by which the characteristic pneumothorax edge may be enhanced by digital image processing. Various examples are given.
Clustering Of Left Ventricular Wall Motion Patterns
Z. Bjelogrlic, J. Jakopin, L. Gyergyek
A method for the detection of wall regions with similar motion is presented. A model based on local direction information is used to measure the left ventricular wall motion from a cineangiographic sequence. Three time functions are used to define segmental motion patterns: the distance of a ventricular contour segment from the mean contour, the velocity of a segment, and its acceleration. Motion patterns are clustered by the UPGMA algorithm and by an algorithm based on the K-nearest neighbor classification rule.
Computer Quantitation Of Location, Extent And Type Of Thallium-201 Myocardial Perfusion Abnormalities*
JHC Reiber, SP Lie, ML Simoons, et al.
To determine quantitatively the location, extent and type of Thallium-201 myocardial perfusion abnormalities from early and late post-exercise scintigrams, a computerized method has been developed. This method is based on: 1) spatial registration of the early and late images using external markers, 2) automated boundary detection of the left ventricular activity structure, 3) bilinear interpolative background subtraction, 4) circumferential profile analysis in the early and late images and 5) computation of the washout circumferential profile. Localization, extent and type of abnormalities are identified by automatic computer comparison of a patient's profiles with corresponding limits of normal profiles. The single user interaction consists of indicating the apex in the 3 views of the study (ANT, LAO45, LAO65). Normal limits have been defined by the 10th and 90th percentiles assessed from a set of 15 normal subjects.
Correlation Study Of Differential Skin Temperatures (DST) For Ovulation Detection Using Infra-Red Thermography
K.H. S. Rao, A. v. Shah, B. Ruedi
The importance of ovulation time detection in the practice of Natural Birth Control (NBC) as a contraceptive tool, and for natural/artificial insemination among women with infertility problems, is well known. The simple Basal Body Temperature (BBT) method of ovulation detection has so far been unreliable. A newly proposed Differential Skin Temperature (DST) method may help minimize disturbing physiological effects and improve reliability. This paper presents preliminary results of a detailed correlative study on the DST method, using Infra-Red Thermography (IRT) imaging and computer analysis techniques. Results obtained with five healthy, normally menstruating women volunteers will be given.
A Segmentation Method For Cell Image Based On Two-Dimensional Histogram
Yoshio Noguchi
A scene segmentation method is proposed for Papanicolaou-stained gynecologic cervical cells. The method is based on the maximum likelihood classifier for two-dimensional pixels consisting of the optical densities measured at corresponding points of a red image and a green image. The images were provided as two scanned monochrome images illuminated with light of 610 nm and 535 nm wavelength. The distribution of the pixels forms a two-dimensional histogram, in which the pixels are assumed to form three clusters: background pixels, cytoplasmic pixels, and nuclear pixels. This paper describes a method to calculate the unknown parameters, namely the mean vector, the covariance matrix and the constant, in an unnormalized density function for each cluster of pixels from the two-dimensional histogram, by assuming the distributions to be normal.
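The classification step (not the paper's parameter-estimation method, which is its actual contribution) can be sketched as a Gaussian maximum-likelihood labelling of (red, green) optical-density pairs; here the cluster means, covariances and priors are assumed to be given:

```python
import numpy as np

def ml_segment(red_od, green_od, means, covs, priors):
    """Label each (red, green) optical-density pixel pair with the cluster
    (e.g. background / cytoplasm / nucleus) of highest Gaussian likelihood.

    means, covs, priors: per-cluster 2-vectors, 2x2 matrices, and weights.
    """
    x = np.stack([np.asarray(red_od, float).ravel(),
                  np.asarray(green_od, float).ravel()], axis=1)   # N x 2 feature vectors
    scores = []
    for mu, cov, p in zip(means, covs, priors):
        diff = x - np.asarray(mu, float)
        inv = np.linalg.inv(cov)
        maha = np.einsum('ni,ij,nj->n', diff, inv, diff)          # Mahalanobis distances
        logdet = np.log(np.linalg.det(cov))
        scores.append(np.log(p) - 0.5 * (maha + logdet))          # log posterior up to a constant
    labels = np.argmax(np.stack(scores), axis=0)
    return labels.reshape(np.asarray(red_od).shape)
```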
Object Oriented Cell Image Segmentation
B. Nordin, E. Bengtsson, B. Dahlqvist, et al.
A correct segmentation of cell images into nucleus, cytoplasm and background is a prerequisite for a working automatic pre-screening device for cervical cytology. This paper presents an algorithm for determining the segmentation thresholds. It is based on a priori assumptions about a cell's shape and size and works on one object at a time, disregarding everything else in the image. The algorithm is capable of verifying that the isolated object really looks like a cell, an essential feature in an automatic system. The nucleus and cytoplasm thresholds are decided upon almost independently of each other. The algorithm works by tracking iso-density contours around the object to be isolated, and its execution time is thus proportional to the length of the contour rather than the area of the image. Some preliminary results are given and the possibility of efficiently implementing the algorithm in hardware is discussed.
Smoothing, Thresholding And Contour Extraction In Images From Gated Blood Pool Studies
H. Bunke, H. Feistl, H. Niemann, et al.
Due to its noninvasive nature, nuclear imaging has become very important for the diagnosis of human heart diseases. In this paper, experiments are described with different operations for smoothing, thresholding, and contour extraction in images from gated blood pool studies. The goal is to locate the heart and the left ventricle in an image. The methods described here are part of a larger system for the automatic analysis of images from gated blood pool studies.
Variable Thresholding Applied To Angiography
G.M. X. Fernando, D. M. Monro
This paper describes a method for automatically determining the outline of the left ventricle (LV) from its angiograms. It is shown that when the histograms of the pictures are relatively narrow, some of the existing techniques are unsuitable for boundary detection. It is further shown that by appropriate partitioning of the pictures, an existing automatic thresholding technique can be applied to these pictures. The discontinuities that occur in the thresholded pictures are eliminated by the use of suitable 2-D filters. The techniques thus developed are applied to two particular LV angiograms at end-systole and at end-diastole.
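A minimal version of the partition-then-threshold idea might look like the following, using Otsu's method as a stand-in for the unnamed "existing automatic thresholding technique" and omitting the 2-D smoothing filters that remove block discontinuities:

```python
import numpy as np

def otsu_threshold(values, n_bins=64):
    """Otsu's method: pick the threshold maximizing between-class variance
    of the grey-level histogram (one classic automatic thresholding technique)."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                  # class-0 probability up to each bin
    m = np.cumsum(p * centers)         # class-0 cumulative mean mass
    mt = m[-1]                         # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        between = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

def blockwise_threshold(img, block=32):
    """Partition the image into blocks and threshold each independently,
    so each block's narrower local histogram is easier to split."""
    img = np.asarray(img, dtype=float)
    out = np.zeros(img.shape, dtype=bool)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            tile = img[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = tile > otsu_threshold(tile.ravel())
    return out
```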
Learning Effects In ROC And Contrast Detail Observer Performance Testing With Medical Images
C. A. Kelsey, R. D. Moseley, F. A. Mettler, et al.
Studies have been conducted to determine whether learning effects are present in contrast detail or receiver operating characteristic (ROC) observer performance tests. Anecdotal responses by more than 20 observers report the presence of learning in both types of observer performance tests. Such learning effects are reported to be strongest during viewing of the first few test images. Detailed investigation of the responses from over 17,000 film readings failed to show a statistically significant learning effect.
Noise Limited Performance Of The Visual System.
Peter Zuidema, Jan J. Koenderink, Armand M. M. Lelkens, et al.
We present results of recent experiments concerning the processing of signals in the visual system under conditions limited by both natural (Poissonian) and artificial white noise. The results for natural noise-limited conditions show that at any location in the visual field, sampling apertures of different sizes are simultaneously present. The dominant aperture size decreases with increasing irradiance. In the temporal domain it appeared that presentation of the signals over a period longer than 1 sec did not further improve performance. In the second part, results of experiments with static and moving artificial noise are presented. It appears that the visual system is capable of processing almost all of the information in the image in order to set its noise-limited threshold. Detection of coherent movement of details within dynamic white noise can be performed at a signal-to-noise ratio of 1%. Models which describe these phenomena are discussed and their possible impact on image processing is suggested.
CT-Images As The Basis Of Operation Planning In Stereotactical Neurosurgery
W. Schlegel, H. Scharfenberg, J. Doll, et al.
A method for stereotactic radiosurgery planning by information extraction from CT images is presented. Information extracted from the CT images is used to calculate the stereotactic approach and the three-dimensional dose distribution for a brain tumour which is to be treated by interstitial radiosurgery. Computer programs have been developed which allow fast interactive treatment planning and the visualisation of the information which is necessary for surgery and radiation dosimetry. The described procedure is a combination of different image processing, reconstruction and display techniques with the aim of precise stereometric information extraction.
Ultrasound Images, Tissue Information And Tissue Characterisation
S. Leeman, J. P. Jones
Ultrasound signals scattered from human tissues contain a wealth of information about tissue properties, in addition to morphology. The range of that information is explored here, and it is demonstrated that processing techniques to extract data about tissue crystallise into three categories: pulse-echo imaging, quantitative imaging, and tissue characterisation. The last may be subdivided into parameter estimation, structure characterisation, and dynamic methods, and a succinct and critical review of current techniques is made.
A Mixed Approach Of Automated ECG Analysis
A. K. De, J. Das, D. Dutta Majumder
The ECG is a non-invasive and risk-free technique for collecting data about the functional state of the heart. The associated data-processing techniques can be classified into two basically different approaches: the first- and second-generation ECG computer programs. Not opposition, but symbiosis of these two approaches will lead to systems with the highest accuracy. In this paper we describe a mixed approach which shows higher accuracy with less computational work. Key words: primary features, patients' parameter matrix, screening, logical comparison technique, multivariate statistical analysis, mixed approach.
A New Filtering Processor For ECG-EEG Signals
V. Cappellini, P. L. Emiliani
Digital processing systems are examined for ECG and EEG signal analysis, in particular to perform filtering and spectral estimation operations. A new special digital processor is presented to perform multiple filtering and spectral estimation operations on several biomedical signals such as ECG and EEG. Two implementation solutions are presented, having different structure and complexity. Experimental results obtained by using the digital processors for analysing EEG signals are shown for high-efficiency filtering and spectral estimation. In particular, the interest of using the processors to perform 2-D analysis on ECG-EEG signals is outlined.
Enhancement Of Visual Evoked Potentials By Adaptive Processing
W. Wolf, U. Appel, H. Rauner
Transient evoked potentials (EP) are variations of the ongoing electroencephalogram (EEG) in response to the application of sensory stimuli. Since their amplitudes are very small in comparison to the spontaneous EEG, signal extraction methods must be applied before their characteristics are measurable. Several signal extraction methods currently used in EP research are outlined, especially those with an adaptive characteristic. As a further development, a new method is proposed which considers the ongoing EEG preceding the stimulus application in the EP processing. The computational procedure is described and some preliminary results are given.
Eye Movement Automated Analysis: A New Improved Approach
B. Morra, R. Albera, G. Massia, et al.
An improved system for automated eye movement analysis is presented. The program (NOUS: Nystagmus Off-line Understanding System), specially designed for the interpretation of electronystagmographic recordings in clinical use, relies as its key element on "fast phase" recognition obtained through tests performed on the instantaneous eye angular velocity, sampled at a 100 Hz rate. A major improvement was introduced, consisting of the modelling of a "finite state automaton" through which each sample is processed before its definitive interpretation. The authors discuss a) the characteristics of this automaton, dictated by the physiological pattern of vestibulo-oculomotor neuronal organization and eyeball mechanical properties, and b) the kinds of artifacts that are correctly identified and rejected.
Psychophysical Comparison Of A Video Display System To Film By Using Bone Fracture Images
George W. Seeley, Mark Stempski, Hans Roehrig, et al.
This study investigated the possibility of using a video display system instead of film for radiological diagnosis. Also investigated were the relationships between characteristics of the system and the observer's accuracy level. Radiologists were used as observers. Thirty-six clinical bone fractures were separated into two matched sets of equal difficulty. The difficulty parameters and ratings were defined by a panel of expert bone radiologists at the Arizona Health Sciences Center, Radiology Department. These two sets of fracture images were then matched with verifiably normal images using parameters such as film type, angle of view, size, portion of anatomy, the film's density range, and the patient's age and sex. The two sets of images were then displayed, using a counterbalanced design, to each of the participating radiologists for diagnosis. Whenever a response was given to a video image, the radiologist used enhancement controls to "window in" on the grey levels of interest. During the TV phase, the radiologist was required to record the settings of the calibrated controls of the image enhancer during interpretation. At no time did any single radiologist see the same film in both modes. The study was designed so that a standard analysis of variance would show the effects of viewing mode (film vs TV), the effects due to stimulus set, and any interactions with observers. A signal detection analysis of observer performance was also performed. Results indicate that the TV display system is almost as good as the view box display; an average of only two more errors were made on the TV display. The difference between the systems has been traced to four observers who had poor accuracy on a small number of films viewed on the TV display. 
This information is now being correlated with the video system's signal-to-noise ratio (SNR), signal transfer function (STF), and resolution measurements, to obtain information on the basic display and enhancement requirements for a video-based radiologic system. Due to time constraints the results are not included here. The complete results of this study will be reported at the conference.
Visibility Of Details When Situated Between Sharp Luminance Discontinuities
G. J. van der Wildt, R. G. Waarts
The effect of sharp luminance discontinuities (thin lines and edges) on the visibility of sinusoidal gratings has been measured as a function of the distance between the discontinuities and the gratings. The distance from which such a discontinuity can affect the contrast sensitivity depends on the spatial frequency of the grating itself, and can be up to several degrees for gratings of low spatial frequency. The influence of the luminance discontinuities is also a function of the luminance of the thin lines and the magnitude of the edges. No difference could be found between the effect of edges and the effect of very thin (5 min. of arc) lines on the contrast sensitivity.
Diagnosis Improvement Of Medical Images By Combination Of Computer Measurements And Visual Judgements
J. P. J. de Valk, E. G. J. Eijkman
Medical images, especially those composed artificially by computer reconstruction, may contain more information than the human eye can perceive. Despite the fact that humans are excellent visual pattern recognizers, additional information extracted by computation may enhance diagnostic capability. This study compares visual judgements of gamma camera images and shows that a classification improvement can be obtained if human judgements and statistical output are combined. ROC analysis accounts for the quality assessment of the human judgements. For a particular set of grey level cooccurrence measures a Fisher linear discriminant is calculated. After comparing the performance of the judgements and the discriminant figures, another Fisher discriminant is composed of both human judgements and statistical measures, showing that the number of false diagnoses can thus be reduced.
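The combination step can be illustrated with a minimal two-class Fisher discriminant over a pair of features, here a human confidence rating and one texture measure per image. The feature values are invented for illustration; only the method (pooled within-class scatter, w = Sw⁻¹(m₁ − m₀)) is the standard construction:

```python
def mean(xs):
    return [sum(col) / len(xs) for col in zip(*xs)]

def fisher_weights(class0, class1):
    """Two-feature Fisher linear discriminant weights."""
    m0, m1 = mean(class0), mean(class1)
    s = [[0.0, 0.0], [0.0, 0.0]]            # pooled within-class scatter (2x2)
    for data, m in ((class0, m0), (class1, m1)):
        for x in data:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

def score(w, x):
    return w[0] * x[0] + w[1] * x[1]

# each sample: (human rating 1-5, a cooccurrence contrast measure) -- toy data
normal   = [(1, 0.20), (2, 0.25), (1, 0.30), (2, 0.22)]
abnormal = [(4, 0.60), (5, 0.55), (3, 0.70), (4, 0.65)]
w = fisher_weights(normal, abnormal)
```

A single projection now weighs the human rating and the computed measure jointly, which is the sense in which the combined discriminant can outperform either input alone.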
Quantitative Enzymatic And Immunologic Histophotometry Of Diseased Human Kidney Tissues Using TV-Camera And Computer-Assisted Image Processing Systems
G. Heinert, W. Mondorf
High-speed image processing was used to analyse morphologic and metabolic characteristics of clinically relevant kidney tissue alterations. Quantitative computer-assisted histophotometry was performed to measure alterations in the levels of the enzymes alkaline phosphatase (AP), alanine aminopeptidase (AAP), γ-glutamyltranspeptidase (GGTP) and β-glucuronidase (β-Gl), with AAP and GGTP also determined immunologically in prepared renal and cancer tissue sections. A "Micro-Videomat 2" image analysis system with a "Tessovar" macroscope, a computer-assisted "Axiomat" photomicroscope and an "Interactive Image Analysis System (IBAS)" were employed for analysing changes in enzyme activities determined by changes in absorbance or transmission. Diseased kidney as well as renal neoplastic tissues could be distinguished by significantly (Wilcoxon test, p<0.05) decreased enzyme concentrations as compared to those found in normal human kidney tissues. These image analysis techniques might be of potential use in the diagnostic and prognostic evaluation of renal cancer and diseased kidney tissues.
Histochemical Applications Of Image Analysis Techniques
G. R. Hillman, D. Johnston, S. W. Kwan, et al.
Elementary image analysis techniques have been used in the analysis of microscope pictures of histochemically labeled tissues. Tissues were treated with labeling substances having specificity for a molecular species of pharmacologic interest. Either fluorescent or dense labels were used in different situations. The objective of analysis is to measure the extent or intensity of labeling, or to describe the number or geometric properties of the labeled structures. The experiments reported here demonstrate the conversion of qualitative pictorial data to quantitative data which can be related to drug effects in a manner familiar to biomedical researchers.
Automated Computer Input Of Paperwritten ECG Records By Image Analysis
S. Haenel
Further development of automated ECG classification requires computer-internal representation of well-diagnosed ECGs from clinical archives, but those ECGs are mainly available as paper-recorded charts. For chart digitization, a high-resolution TV camera connected to a digital computer can be used, together with a software package for automated image processing adapted to the special properties of paper-recorded ECG signals. The adapted image processing scheme consists of the following steps: after reconstruction of the lost zero-voltage baseline by local filtering of horizontal sections of the ECG signal, convex contours of the traces are recognized by gradients. The recognition procedure ensures true acquisition of the ECG signal even in the critical ranges of peaks. The full chart length is processed as several frames scanned with partial overlap and connected after curve extraction by a fast correlation method. The result of the image processing is an internal representation of paper-written ECG records, compatible with the electronic ECG recording format.
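The frame-connection step can be sketched as follows: given two extracted trace segments whose scanned frames overlap, choose the overlap length whose match error is smallest and concatenate. The fixed search window and least-squares matching are assumptions standing in for the paper's fast correlation method:

```python
def join(a, b, max_overlap):
    """Concatenate traces a and b, choosing the overlap length
    (1..max_overlap) for which the suffix of a best matches the prefix
    of b in the mean-squared-error sense."""
    best_len, best_err = 1, float("inf")
    for L in range(1, max_overlap + 1):
        err = sum((x - y) ** 2 for x, y in zip(a[-L:], b[:L])) / L
        if err < best_err:
            best_len, best_err = L, err
    return a + b[best_len:]
```

Applied frame by frame, this reassembles the full chart-length signal from the individually extracted curves.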
A Method For Evaluating Features Of Outlined Components Of An Image
Luigi P. Cordella
A method is proposed for accessing the inside of outlined contours so as to evaluate density features of the contoured regions while measuring their perimeter and area. Such features are among the most commonly used in biomedical image processing, but for large images their correct evaluation is not straightforward unless an adequate amount of main memory is available. This is especially true when the components of an image are outlined by methods which do not give rise to simple contours. Compared with other methods, the one we propose requires a much smaller amount of main memory while remaining fast enough for many purposes.
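As a small illustration of evaluating geometric features from the contour alone, without holding the full image in memory, area and perimeter of a closed polygonal contour can be accumulated in one pass (the paper's low-memory interior-access scheme for density features is not reproduced here):

```python
import math

def area_perimeter(contour):
    """contour: ordered list of (x, y) vertices of a closed polygon."""
    area2, perim = 0.0, 0.0
    n = len(contour)
    for i in range(n):
        x0, y0 = contour[i]
        x1, y1 = contour[(i + 1) % n]
        area2 += x0 * y1 - x1 * y0              # shoelace term
        perim += math.hypot(x1 - x0, y1 - y0)   # edge length
    return abs(area2) / 2.0, perim
```

Only the vertex list is kept resident, which is the memory behaviour the method aims for.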
New Approaches Towards Quantitative Echography
F. Hottier, M. Fink, J. Donjon, et al.
Among all the acoustic parameters of a biological tissue, the attenuation is an indicator sensitive to the pathology of an organ. Considering the basic non-stationary properties of the ultrasonic signal, caused by the filtering effect of the biological medium as well as by diffraction phenomena, a time-frequency representation of the signal, such as short-time Fourier analysis, is needed. The use of indicators such as the centroid of the running power spectrum then makes it possible to evaluate the attenuation at a local scale. Other algorithms, based on A-line intensity measurements in selected frequency bands, also provide the attenuation as well as information on the statistical properties of the medium. It will be shown that the effect of the system (diffraction and transducer acoustoelectric response) can be partially deconvolved.
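The centroid indicator exploits the fact that, for a roughly Gaussian pulse spectrum, frequency-dependent attenuation shifts the centroid of the running power spectrum downward with depth, and the downshift rate estimates the attenuation slope. A minimal sketch with synthetic spectra (all numbers invented for illustration):

```python
import math

def centroid(freqs, power):
    """First moment of a sampled power spectrum."""
    return sum(f * p for f, p in zip(freqs, power)) / sum(power)

freqs = [f / 10.0 for f in range(10, 61)]        # 1.0 .. 6.0 MHz grid

def gaussian_spectrum(center, sigma=0.8):
    return [math.exp(-((f - center) ** 2) / (2 * sigma ** 2)) for f in freqs]

shallow = gaussian_spectrum(3.5)   # running spectrum at a shallow depth
deep = gaussian_spectrum(3.1)      # spectrum after further lossy propagation

downshift = centroid(freqs, shallow) - centroid(freqs, deep)
```

Dividing the measured downshift by the depth difference (and by the spectral variance, under the Gaussian model) yields a local attenuation estimate.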
A New Digital Approach To Synthesizing Ultrasonic Images In Real-Time
E. Schuster
By using digital processing techniques and by exploiting a new technology which became available only recently, a radically new approach to synthesizing ultrasonic images in real time has become feasible. The most striking features of this alternative imaging scheme are: real-time imaging, 8 bit quantization, improved fidelity, variable resolution and point-shaped mapping.
Ultrasonic Transmission Imaging In Medical Diagnosis
H. Brettel, U. Roeder, C. Scherg, et al.
In contrast to the well-known pulse-echo technique, ultrasonic transmission imaging provides information on the attenuating properties of the tissue under examination. The method is based on the same principles as optical imaging with lenses, and the results are displayed in a view comparable to x-ray images. We report on the technical characteristics of an ultrasonic transmission imaging system and present results showing details of the skeleton as well as soft tissue structures of normal persons.
Acoustics, Packaging, And Processing: Changing Trends In Medical Diagnostic Ultrasound
Charles F. Hottinger, Mark D. Johnson
In the fifteen years since medical diagnostic ultrasound became widely available, several trends have emerged. First, pulse-echo technology has supplanted all competing acoustic mechanisms. Second, packaging and display modes have become increasingly important. Certain specialized types of systems, such as dedicated A-mode or M-mode scanners, have been replaced by more versatile B-mode imaging systems configured to meet specific clinical needs. Finally, signal processing techniques, whether image enhancement or non-image analysis and tissue characterization, are receiving increased attention. Innovation seems to be moving from acoustic issues to packaging and processing issues. While dramatic improvements in image quality are still possible and probable, changes in the next few years may be oriented less toward acoustics than toward packaging and processing.
Characterization Of Intermediate Cells In Normal And Abnormal Cervical Smears
M. Pahlplatz, D. Zahniser, P. Oud, et al.
To test the model of cervical carcinoma as a field carcinogenic process, inducing even in normal-appearing cells chromatin structural changes which can be measured by cytophotometry, we digitized populations of 50 intermediate cell nuclei in normal and abnormal cervical smears. The slides were stained with the Thionin Feulgen Congo Red stain. In a trial of sample classification we used the averages and coefficients of variation per slide of 20 nuclear features. First analysis results, using the statistical pattern recognition package ISPAHAN, show differences between the classes, but the classification is difficult due to large intraclass variances. These variations might be caused by subpopulations of progressive, stable and regressive lesions in each abnormal class and by (non)malignant changes in the normals. Indeed, limiting the normals to a subset of "real" normals revealed a tendency in feature space toward greater abnormality (distance from the normals) in line with the a priori classification of the smears.
Shape Analysis Of Ostracodes : Feature Extraction, Classification And Automation
D. Adelh, J. M. Chassery
In shape analysis theory, the notion of ovoidal shape is well captured by Fourier analysis. In this paper we present experimental results on the classification of small crustaceans based on shape analysis. First, a set of parameters characterizing the notion of ovoidal shape is presented. Next, that set of features is automated on a microscopic image analyser. To prove the discriminatory power of such features, we compare the rate of correct classification on a population using these automated features against the Fourier descriptors.
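The Fourier descriptors used as the reference method can be sketched as the magnitudes of the discrete Fourier coefficients of the contour treated as a complex signal, normalized for translation and scale. The normalization choices here (drop c0, divide by |c1|) are one common convention, assumed for illustration:

```python
import cmath
import math

def fourier_descriptors(contour, k):
    """First k scale- and translation-normalized Fourier descriptor
    magnitudes of a closed contour given as ordered (x, y) points."""
    z = [complex(x, y) for x, y in contour]
    n = len(z)
    def coeff(m):
        return sum(z[t] * cmath.exp(-2j * math.pi * m * t / n)
                   for t in range(n)) / n
    scale = abs(coeff(1))            # |c1| fixes overall size
    return [abs(coeff(m)) / scale for m in range(2, 2 + k)]

# a perfectly ovoidal (circular) outline: the higher descriptors vanish
circle = [(10 + 5 * math.cos(2 * math.pi * t / 64),
           10 + 5 * math.sin(2 * math.pi * t / 64)) for t in range(64)]
descriptors = fourier_descriptors(circle, 3)
```

For an ideal ovoid the higher-order descriptors are near zero, so their magnitudes measure departure from ovoidal shape.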
A Recursive Classification System With Colour Feature Correction For Bone Marrow Cells
Haig Dolabdjian
In automated cell classification, colour features are very critical because of varying staining quality. A method is presented for correcting colour features with a colour correction matrix in a recursive classification system, making them useful for classification purposes. The system is based on calculating an individual colour correction matrix for each preparation subimage, by comparing the extracted average chromaticity coordinates of the cell in the subimage that is correctly classified with the highest probability using geometrical and textural features only against the known reference average chromaticity coordinates of the actual cell type.
Computer-Aided Diagnosis Of Leukemic Blood Cells
U. Gunter, H. Harms, M. Haucke, et al.
In a first clinical test, computer programs are being used to diagnose leukemias. The data collected include blood samples from patients suffering from acute myelomonocytic, acute monocytic, acute promyelocytic, myeloblastic, prolymphocytic and chronic lymphocytic leukemias and leukemic transformed immunocytoma. The proper differentiation of the leukemic cells is essential because the therapy depends on the type of leukemia. The algorithms analyse the fine chromatin texture and distribution in the nuclei as well as size and shape parameters of the cells and nuclei. Cells with similar nuclei from different leukemias can be distinguished from each other by analyzing the cell cytoplasm images. Recognition of these subtle differences in the cells requires an image sampling rate of 15-30 pixels/micron. The results for the entire data set correlate directly with established hematological parameters and support the previously published initial training set.
A Color Metric As A Tool For Cytologic Image Analysis
C. Garbay
A color texture or image can be seen as a hierarchical imbrication of primitives of different sizes, shapes, colors and textures. To characterize it, we thus need to define both a color image model and a featuring approach. A color metric is defined, from which a global measure (strength) and a local operator are derived. The local operator is used as a homogeneity criterion in the framework of a quadtree description of cell images, where a given color region is characterized by its strength. Some texture features are computed over the q-images, which correspond merely to the extension of a classical featuring approach. The discriminatory powers of the classical and extended parameters are finally compared on a set of bone marrow cells.
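The quadtree description can be sketched as recursive four-way splitting driven by a pluggable homogeneity test. Here a simple grey-value range test stands in for the paper's colour-metric operator; the image and threshold are invented:

```python
def homogeneous(img, x, y, size):
    """Toy criterion: value range within the block at most 1."""
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    return max(vals) - min(vals) <= 1

def quadtree(img, x=0, y=0, size=None):
    """Split a square image into leaves (x, y, size) until each block
    is homogeneous or a single pixel."""
    if size is None:
        size = len(img)
    if size == 1 or homogeneous(img, x, y, size):
        return (x, y, size)
    h = size // 2
    return [quadtree(img, x, y, h), quadtree(img, x + h, y, h),
            quadtree(img, x, y + h, h), quadtree(img, x + h, y + h, h)]

img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
tree = quadtree(img)
```

Replacing `homogeneous` by a colour-metric operator, and attaching a strength value to each leaf, yields the kind of q-image description the abstract refers to.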
A Quantitative Analysis Method Of Trabecular Pattern In A Bone
Masanori Idesawa, Toyohiko Yatagai
The orientation and density of the trabecular pattern observed in a bone are closely related to its mechanical properties, and diseases of a bone appear as changes in the orientation and/or density distribution of its trabecular patterns. These have so far been treated from a qualitative point of view because a quantitative analysis method has not been established. In this paper, the authors propose and investigate quantitative methods for analysing the density and orientation of trabecular patterns observed in a bone. These methods give an index for evaluating the orientation of a trabecular pattern quantitatively; they have been applied to analyze the trabecular pattern observed in the head of a femur, and their usefulness is confirmed. Key Words: Index of pattern orientation, Trabecular pattern, Pattern density, Quantitative analysis
Quantitative Ultrasound Scatter Imaging
Joie Pierce Jones, Sidney Leeman, Jonathan Blackledge
Ultrasound/tissue interactions provide a fertile base for imaging techniques, and a number of novel and apparently diverse methods for human imaging are currently being developed. An original, unified theory of these quantitative scatter-imaging techniques is developed. It is demonstrated that the success of the methods is inextricably bound together with the development of physical models for the ultrasound/tissue interaction. The concept of image 'fuzziness' is introduced and contrasted against attained image resolution.
Echographic Signal Processing : A Method Taking Into Account The Medium Geometry
A. Herment, P. Peronneau, J. P. Moutet
A method for the treatment of the R-F signals obtained by contemporary echographic systems is described. The process is able to take into account some geometric characteristics of the interfaces of the examined medium during the deconvolution of received echoes. A finer characterization of the tissues, especially by their acoustic impedance, may thus be expected. This paper describes the principles and the implementation of the method and specifies its performance. The ability of the method to process actual echographic signals is illustrated by two examples of application on the eye.
Metaphase Selection By Analysis Of The Fourier Transformed Image
P. Hutzler, K. Stettmaier, H. Brettel
Automatic metaphase finding may be done by analysing the images of the chromosomes in the image plane or by investigation of their Fourier spectrum. We found that both digital Fourier transformation and optical Fourier processing of these objects give results which are in good agreement.
Linear And Nonlinear Feature Extraction Methods Applied To Automatic Classification Of Proliferating Cells
W. Hobel, W. Abmayr, S. J. Poppl, et al.
Numerical experiments were performed to find optimum feature extraction procedures for the classification of mouse L-fibroblasts into G1, S and G2 subpopulations. From images of these cells, different feature sets such as geometric, densitometric, textural and chromatin features were derived, which served as the data base for the numerical experiments. Linear and nonlinear supervised stepwise learning techniques for the discrimination of the cells into G1, S and G2 were performed. The classification error was used as the criterion for the evaluation of the different numerical feature selection methods. Optimum results were obtained by combining distance-based feature selection methods with nonlinear discriminant analysis. The successive solution of 2-class problems improves the results compared to the solution of the 3-class problem. Linear discriminant analysis then may surpass quadratic discriminant analysis.
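Stepwise selection with classification error as the criterion can be sketched as a greedy forward search. The nearest-class-mean classifier and the toy two-feature data below are stand-ins for the paper's discriminant analyses:

```python
def error_rate(samples, labels, feats):
    """Training error of a nearest-class-mean classifier restricted to
    the feature indices in feats."""
    classes = sorted(set(labels))
    means = {c: [sum(s[f] for s, l in zip(samples, labels) if l == c) /
                 labels.count(c) for f in feats] for c in classes}
    errors = 0
    for s, l in zip(samples, labels):
        pred = min(classes, key=lambda c: sum((s[f] - m) ** 2
                                              for f, m in zip(feats, means[c])))
        errors += pred != l
    return errors / len(samples)

def forward_select(samples, labels, n_feats, k):
    """Greedily add the feature that most reduces classification error."""
    chosen = []
    for _ in range(k):
        best = min((f for f in range(n_feats) if f not in chosen),
                   key=lambda f: error_rate(samples, labels, chosen + [f]))
        chosen.append(best)
    return chosen

# toy cells: feature 0 is uninformative, feature 1 separates G1 from S
samples = [(5, 0.10), (3, 0.20), (4, 0.15), (5, 0.90), (3, 1.00), (4, 0.95)]
labels = ["G1", "G1", "G1", "S", "S", "S"]
```

Distance-based selection criteria, as in the paper, replace `error_rate` with a class-separability measure evaluated on the candidate feature subset.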
Metaphase Finding Using A Fast Interval Processor, METAFIP
Geoffrey Shippey, Royston Bayley, Erik Granum
METAFIP is an image analysis system designed to find cells in the metaphase state on a microscope slide for subsequent recall, and eventually to rank them in accordance with some quality criteria. It is based on the Fast Interval Processor (FIP), a high-speed image analyser which scans the image using a linear charge-coupled diode array. Slide scan rates of tsq mm./sec. have been achieved without saturating the computer. An 8 MHz stream of pixel density values is reduced by interval coding to a data rate which can be digested in a fast minicomputer. The paper describes how the interval parameters (or features) extracted by the hard-wired preprocessor are used by the metaphase finding algorithms. Results are given showing encouraging discrimination between metaphase and non-metaphase cells on a variety of materials.
Modified Voronoi Diagram And Relative Neighbors On A Digitized Picture And Their Applications To Tissue Image Analysis
Jun-Ichiro Toriwaki, Kenji Mase, Yoshiyuki Yashima, et al.
The Dirichlet tessellation (DT), the Voronoi diagram (VD) and the relative neighbor (RN) on a digitized picture plane are presented with their applications to image processing. In the past the DT and the VD were defined for a finite point set in the continuous space. Here the modified digital Dirichlet tessellation (MDDT), the modified digital Voronoi diagram (MDVD) and the modified digital relative neighbor (MDRN) are newly introduced as the counterparts of the DT, the VD and the RN, defined for a finite set of connected components in a digitized binary picture. Two algorithms, a sequential type and a parallel type, are given to obtain the MDDT and the MDVD. Experimental results are shown concerning applications of the MDDT, the MDVD, and the MDRN to image processing, which include texture analysis of microscopic images of pathological samples and region division of chest x-ray images.
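One common discrete analogue of the construction, sketched here as an assumption rather than the paper's exact algorithms, assigns every pixel the label of the nearest seed component by breadth-first label propagation in 4-neighbour (city-block) distance:

```python
from collections import deque

def digital_voronoi(shape, seeds):
    """shape: (width, height); seeds: {label: [(x, y), ...]} giving the
    pixels of each connected component. Returns a label per pixel."""
    w, h = shape
    owner = [[None] * w for _ in range(h)]
    queue = deque()
    for label, pixels in seeds.items():
        for x, y in pixels:
            owner[y][x] = label
            queue.append((x, y))
    while queue:                       # fronts grow one city-block step at a time
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and owner[ny][nx] is None:
                owner[ny][nx] = owner[y][x]
                queue.append((nx, ny))
    return owner

regions = digital_voronoi((7, 3), {"A": [(0, 1)], "B": [(6, 1)]})
```

The region boundaries between differently labelled pixels form the digital Voronoi diagram of the seed components; ties on the midline are broken by propagation order in this sketch.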
Distance Measures Of Distributions And Classification Oriented Feature Selection
S. J. Poppl
Distance measures of distributions are often used to estimate upper and lower bounds on the probabilities of misclassification. Sharp lower and upper bounds are of great importance for feature selection, that is, for classification-oriented feature interpretation. The Matusita affinity [6] gives sharp upper bounds, and the divergence [4] gives lower bounds, on the probabilities of misclassification. This paper discusses the properties of these two distance measures. Other measures are compared at length in [9].
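For two discrete class-conditional distributions with equal priors, the standard relations between the Matusita distance, the affinity (Bhattacharyya coefficient) ρ, and the Bayes error can be checked directly; the two example distributions are invented for illustration:

```python
import math

def bhattacharyya(p, q):
    """Affinity rho = sum of sqrt(p*q)."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

def matusita(p, q):
    """Matusita distance; satisfies d^2 = 2 * (1 - rho)."""
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(p, q)))

def bayes_error(p, q):
    """Exact Bayes error for two classes with equal priors."""
    return 0.5 * sum(min(a, b) for a, b in zip(p, q))

p = [0.7, 0.2, 0.1]
q = [0.1, 0.2, 0.7]
rho = bhattacharyya(p, q)
upper = 0.5 * rho                                # affinity upper bound
lower = 0.5 * (1 - math.sqrt(1 - rho * rho))     # matching lower bound
err = bayes_error(p, q)
```

Since min(a, b) ≤ √(ab) term by term, the Bayes error can never exceed ρ/2, which is the sense in which the affinity bounds the misclassification probability.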
The Data Analysis In Epidemiology: A Methodology To Reduce Complexity
F. Esposito
The paper presents an attempt to define and test a systematic methodology for studying medical phenomena in which large sets of variables of an essentially qualitative kind are involved. The methodology consists in the organized application of the various methods of data analysis and multivariate statistics to epidemiology, in order to analyze phenomena that are very complex in the number and variety of their variables, to evaluate environmental influence, and to understand the evolution of the phenomenon. The data analysis techniques play the role both of tools for synthesizing and validating indexes and of methods for formulating models of disease propagation by defining the prognostic function. The methodology is applied to define the epidemiological model of Essential Arterial Hypertension.
Profi-11: A Simple Dialog Language For The Processing Of Image Sequences
M. Bohm, G. C. Nicolae, K. H. Hohne
Applications of digital image processing in medicine have grown in the past five years at a tremendous rate. Many hardware systems have been developed by research groups as well as by commercial manufacturers. During the initial phase of evolution, efforts were mainly focused on the development of new hardware systems, which were run using existing system and application programming technologies. As early as 1974 [1] it was recognized in our working group that image processing systems required specially tailored software environments which offer fast, flexible and interactive access to the new types of data objects (e.g. images, image sequences) and the operations defined on them. We therefore developed an experimental tool for the generation of interactive image processing systems, XDL [2,3], with which the arising problems could be studied. In 1977 we felt the need for a production version of a dialog system for our applications in medical image sequence processing [4,5]. We decided to design and implement a compact image processing tool parallel to the ongoing research on the experimental system [6]. This language, called PROFI-11 (Processing and Retrieval of Functional Images), has been in routine operation since the end of 1977. The requirements, the language features and the experience gained in four years of operation are described in this paper. Recently several image processing languages have been described which are essentially extensions of existing languages [7]. With PROFI-11 we have chosen a different approach, as will be seen from the following sections.
GOP, A Paradigm In Hierarchical Image Processing
G. H. Granlund, J. Arvidsson, H. Knutsson
The GOP image computer consists of an algorithmic structure as well as a hardware structure. The algorithmic structure employs a new type of information representation which allows a high degree of data compression. It includes a class of context-sensitive symmetry operations which have proved very powerful for the processing of image information. Together these features allow processing in a hierarchical structure, which has proved necessary for effective processing. The hardware structure allows implementation of this class of algorithms, as well as most other image processing operations suggested. The processor has a two-stage architecture, where the first stage has a power corresponding to one billion operations per second, providing a considerable data reduction. The second part allows a flexible combination of partial results at floating point accuracy.
New Suggestions For Image Parallel Architecture
Per-Erik Danielsson
Image parallel architecture is defined as SIMD machines with bit-serial processors. An important class of operations in image processing is neighborhood operations. The so-called distributed processor topology, supported by fast direct data paths and an efficient control unit, is presented as the best possible solution to the neighborhood access problem. Table look-up is another feature that should be provided to each processor to enhance histogramming, feature counts, arbitrary gray-scale transforms, logic operations, etc. The ideas presented are only a part of a more complete study of an image parallel processor architecture.
Construction Of VAP, A Video Array Processor, Guided By Some Applications Of Biomedical Image Analysis
A. Favre, Hj. Keller, A. Comazzi
Enlarging the range of pictures amenable to automatic image analysis and speeding up image preprocessing were the motives leading to the construction of VAP. Its main features were derived from simulation studies on biological electron micrographs: flexibility in the choice of the transformations which must include nonlinear pixel operations, high speed execution, and an interactive environment for programming. The VAP hardware uses a tree-like structure with look-up tables to calculate a new value for a pixel from the input values of various selectable neighbour pixels. Several biomedical applications are discussed.
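The look-up-table principle behind such designs can be sketched in miniature: a small neighbourhood is packed into an index and a precomputed table yields the output pixel, so arbitrary (including nonlinear) pixel functions cost a single table access. A flat one-level table over a 3x1 neighbourhood of 2-bit pixels stands in here for VAP's tree of pairwise tables; all sizes are toy assumptions:

```python
BITS = 2                                   # toy 2-bit pixel values (0..3)

def build_lut(fn):
    """Precompute fn(left, centre, right) for every packed 3-pixel index."""
    size = 1 << (3 * BITS)
    lut = [0] * size
    for idx in range(size):
        l = (idx >> (2 * BITS)) & 3
        c = (idx >> BITS) & 3
        r = idx & 3
        lut[idx] = fn(l, c, r)
    return lut

def apply_lut(row, lut):
    """One pass over a row: each output pixel is a single table access."""
    out = []
    for i in range(1, len(row) - 1):
        idx = (row[i - 1] << (2 * BITS)) | (row[i] << BITS) | row[i + 1]
        out.append(lut[idx])
    return out

# a nonlinear neighbourhood operation (median filter) as one LUT
median_lut = build_lut(lambda a, b, c: sorted((a, b, c))[1])
```

The table is built once per operation; the per-pixel cost is then independent of how nonlinear the chosen function is, which is the property that motivates hardware LUT structures.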
Processor-Oriented Methods For Quantitative Image Analysis Applied To Cytology And Radiology
E. R. Reinhardt
For widespread application of image analysis procedures, methods are required which can be systematically adapted to other problems. Regarding expense and effectiveness, the analysis of different image material should be based on the same processing modules. This purpose requires special picture evaluation systems (hardware and software components). Processor-oriented analysis procedures which satisfy these requirements will be presented, and their efficiency will be demonstrated by experimental results.
Lung Function Measurement By Optical Contouring
A R Gourlay, G Kaye, D M Denison, et al.
The use of an optical contouring method for lung function testing is discussed and compared with current techniques. Possible advantages of the optical method are reviewed.
Functional Nuclear Imaging Of Respiratory Dynamics
Earl E. Gose, Thomas Milo, W. Earl Barnes, et al.
A technique has been developed for producing sequential images of respiratory dynamics without physiological gating. Features from these image sequences have been used to produce new types of functional images which illustrate the regional physiology and anatomy of the lung.
Computer Analysis Of ILO Standard Chest Radiographs Of Pneumoconiosis
C. C. Li, David B. C. Shu, H. T. Tai, et al.
This paper presents a study of computer analysis of the 1980 ILO standard chest radiographs of pneumoconiosis. Algorithms developed for the detection of individual small rounded and irregular opacities have been tested and evaluated on these standard radiographs. The density, shape, and size distribution of the detected objects in the lung field, in spite of false positives, can be used as indicators of the onset of pneumoconiosis. This approach is potentially useful in a computer-assisted screening and early detection process where the annual chest radiograph of each worker is compared with his or her own normal radiograph obtained previously.
Three-Dimensional Reconstruction Of The Blood Vessels Of The Brain
P. Suetens, A. Oosterlinck, A. Haegemans, et al.
The possibility of three-dimensional reconstruction of the cerebral blood vessels has been investigated. In the first place, reconstruction from two orthogonal subtraction angiograms has been explored. Secondly, we have applied an automatic image matching algorithm to a stereo pair of subtraction angiograms. A third approach, which we term computerized traditional tomography, uses the geometry of traditional tomography, but instead of radiographic film a digital radiography device has to be used to store the projections.
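The first approach rests on an intersection idea that can be caricatured in two dimensions: a point is retained only if it projects into the vessel silhouette in both orthogonal views. Real angiograms require far more care (overlapping vessels, ambiguous matches), so this is only the geometric core, with invented silhouettes:

```python
def reconstruct(proj_x, proj_y):
    """proj_x[j] = 1 if a vessel is seen in column j of view 1;
    proj_y[i] = 1 if a vessel is seen in row i of view 2.
    Keep exactly the cells consistent with both binary projections."""
    n, m = len(proj_y), len(proj_x)
    return [[1 if proj_y[i] and proj_x[j] else 0 for j in range(m)]
            for i in range(n)]

grid = reconstruct([0, 1, 1, 0], [1, 1, 0, 0])
```

The reconstruction is the largest set consistent with both projections, which is why two views alone can overestimate the vessel volume and additional constraints are needed in practice.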
Pattern Recognition Of Blood Vessel Networks In Ocular Fundus Images
K. Akita, H. Kuga
We propose a computer method for recognizing blood vessel networks in color ocular fundus images, which are used in the mass diagnosis of adult diseases such as hypertension and diabetes. A line detection algorithm is applied to extract the blood vessels, and their skeleton patterns are constructed to analyze and describe their structures. The recognition of line segments as arteries and/or veins in the vessel networks consists of three stages. First, a few segments which satisfy a certain constraint are picked out and discriminated as arteries or veins; this is the initial labeling. Then the remaining unknown segments are labeled by utilizing physical-level knowledge. We propose two schemes for this stage: a deterministic labeling and a probabilistic relaxation labeling. Finally, the label of each line segment is checked so as to minimize the total number of labeling contradictions. Some experimental results are also presented.
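The probabilistic relaxation scheme can be sketched on a toy chain of three connected segments: each segment holds a probability per label, and the probabilities are iteratively reweighted by their compatibility with neighbouring segments. The compatibility values and initial probabilities here are invented, not the paper's physical-level knowledge:

```python
LABELS = ("artery", "vein")
# connected segments tend to carry the same label
COMPAT = {("artery", "artery"): 1.0, ("vein", "vein"): 1.0,
          ("artery", "vein"): 0.1, ("vein", "artery"): 0.1}

def relax(probs, neighbours, iterations=10):
    """Classic multiplicative relaxation-labeling update."""
    for _ in range(iterations):
        updated = []
        for i, p in enumerate(probs):
            support = {}
            for lab in LABELS:
                s = sum(COMPAT[(lab, other)] * probs[j][other]
                        for j in neighbours[i] for other in LABELS)
                support[lab] = p[lab] * s / max(len(neighbours[i]), 1)
            total = sum(support.values())
            updated.append({lab: v / total for lab, v in support.items()})
        probs = updated
    return probs

# chain 0-1-2: segment 0 confidently an artery, the others ambiguous
neighbours = {0: [1], 1: [0, 2], 2: [1]}
probs = [{"artery": 0.9, "vein": 0.1},
         {"artery": 0.5, "vein": 0.5},
         {"artery": 0.5, "vein": 0.5}]
result = relax(probs, neighbours)
```

The confident initial label propagates down the chain, which is how the initially labeled segments resolve the ambiguous ones in the second stage.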
Application Of Decision-Making Methodology To Certificate Of Need Applications For CT Scanners
Hans W. Gottinger, P. Shapiro
This paper describes a case study and application of decision-making methodology to two competing Certificate of Need (CON) Applications for CT body scanners. We demonstrate the use of decision-making methodology by evaluating the CON applications. Explicit value judgements reflecting the monetary equivalent of the different categories of benefit are introduced to facilitate this comparison. The difference between the benefits (measured in monetary terms) and costs is called the net social value. Any alternative with positive net social value is judged economically justifiable, and the alternative with the greatest net social value is judged the most attractive.
Imaging Advances In Federally Funded Programs
Roger S. Powell
Various Federal agencies of the United States Government have provided extensive funding support for the development of modern medical diagnostic imaging systems and for the computer technology which has made them possible. Representative images are shown which have been produced by imaging systems developed under contract and grant programs funded by the National Institutes of Health. This research was carried out by investigators in universities, hospitals, and industry. The examples are selected from research in radiology, nuclear medicine, ultrasound, and nuclear magnetic resonance imaging.
Nonlinear Parameter Tomography, Active Incoherent Imaging And Adaptive Imaging For Ultrasonic Tissue Characterization
Takuso Sato, Makoto Hirama, Takayoshi Yokota, et al.
In this paper, three new ultrasonic imaging techniques aiming at a more exact characterization of tissue are presented. First, a new tomographic imaging system for the nonlinear parameter B/A is proposed. Scanning low-frequency pumping waves and high-frequency probing waves are used. Several images of biological objects and their interpretation are given. Second, a method of active incoherent imaging which is effective both for diffusive and specular objects is shown. A scanning transmitter, an array receiver and 2-D nonlinear spectral estimation are used. Finally, an active adaptive imaging method through an unknown inhomogeneous layer is given. It is based on the eigenvector decomposition of the received data matrix. The degree of focusing is judged and the best beam is formed. Experimental results are shown as well as numerical results.
Medical Imaging By Various Modes Of Ultrasound Computerized Tomography
H. Ermert, D. Hiller
Computerized Tomography (CT) using ultrasonic waves is possible in various modes; the application of CT algorithms leads to cross-sectional distributions of several physical parameters. These images promise progress in the tissue characterization of small organs, for example in tumour diagnosis of the female breast and of the male testicles.
A Perspective Projection Algorithm With Fast Evaluation Of Visibility For Discrete Three-Dimensional Scenes
H. Oswald, W. Kropatsch, F. Leberl
We have developed a three-dimensional display algorithm for discrete objects in a discrete space. Objects may be concave or convex, have holes (including interior holes) and can consist of disjoint parts. The input for our algorithm is a binary scene derived from a consecutive set of segmented and interpolated computed tomography (CT) cross-sections. The output is a shaded three-dimensional appearance of an organ from a user specified viewing direction.
A Database System Of Microscopic Cell Images
Naokazu Yokoya, Hideyuki Tamura
This paper describes a prototype of an image database system which manages both pathological microscopic cell images and symbolic data as secondary information about the images. The database physically consists of three kinds of files: (i) image files, (ii) text files in which medical doctors' diagnostic comments on each image are stored, and (iii) relational tables which contain information about both images and diagnoses. It is designed both for quick-looking by medical doctors and cytoscreeners and for evaluating image analysis algorithms in automated cytology. Major tasks supported by the system are as follows: (a) image and symbolic data retrieval, (b) image editing, and (c) image analysis. Additional tasks such as digital imaging and storage media conversion are also supported. Most of these tasks are executed by means of a command-oriented query language, with which a user can interactively access the database.
Digital Picture Archiving And Communication Systems (PACS) In Diagnostic Radiology: A Review
Andre J. Duerinckx
The concept of a Picture Archiving and Communication System (PACS) is defined. Existing and proposed Analog and Digital PAC Systems are described and compared in view of today's trend toward "Filmless Radiology". Rather than a "Filmless" Radiology department, we predict a Radiology department "with less film" only. It is shown how Digital PACS can help bridge the gap between analog film file rooms and filmless radiology data bases.
Integrated ECG Data Base For Epidemiological Studies
R. Engelbrecht, S. J. Poeppl, H. Schubel, et al.
Data base technology has been introduced into the field of medical data processing. The functions of database management systems (DBMS) have several relationships to the requirements for storage, retrieval, and update of data in health-related studies. These relationships are discussed and demonstrated in the data management of an epidemiological study - the Munich Blood Pressure Study (MBS), where the commercially available DBMS ADABAS is used. The scope of data items ranges from personal information through the medical history and laboratory results to the data of an electrocardiogram and its automatic analysis.
An Interactive Method Of Teaching Blood Cell Identification: Evaluating The System
Joan N. McHam, Arthur I. Karshmer, Michael Shaw
The task of teaching medical personnel how to identify blood cells is complicated by several factors relating to the type of teaching equipment required and the amount of teacher-student interaction available in the traditional teaching environment. Equipment such as 'double headed' microscopes and slide projectors has been used in the classroom as an interactive teaching tool, but is of limited value as it requires the presence of an instructor during the basic phase of learning blood cell identification. Textbooks and manuals augment this process in a non-interactive fashion. A student can therefore expect a rather small number of 'interactive' hours of instruction during a normal course in blood cell identification.
Evaluation Of Medical Technology
David F. Preston, Samuel J. Dwyer III, Larry T. Cook, et al.
Shannon's mutual information function is useful in modeling clinical decisions. The equivocation function determines useful bounds on the average probability of error. A model for medical diagnosis using Shannon's uncertainty function is demonstrated. An example is provided to illustrate the importance of the equivocation function.
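As a modern illustration of the quantities this abstract names, the sketch below computes the equivocation H(D|T) - the uncertainty about disease state D remaining after a test result T - for a hypothetical binary diagnostic test, and notes how Fano's inequality turns it into a lower bound on the average error probability. The prevalence, sensitivity, and specificity figures are invented for the example; nothing here is the paper's model.

```python
import math

def equivocation(joint):
    """Conditional entropy H(D|T) in bits.
    `joint` is a dict {(d, t): p} over disease/test outcome pairs."""
    # Marginal distribution over test outcomes.
    pt = {}
    for (d, t), p in joint.items():
        pt[t] = pt.get(t, 0.0) + p
    h = 0.0
    for (d, t), p in joint.items():
        if p > 0:
            h -= p * math.log2(p / pt[t])  # -sum p(d,t) log p(d|t)
    return h

# Hypothetical 2x2 example: prevalence 0.1, sensitivity 0.9, specificity 0.8.
joint = {
    ("D+", "T+"): 0.1 * 0.9, ("D+", "T-"): 0.1 * 0.1,
    ("D-", "T+"): 0.9 * 0.2, ("D-", "T-"): 0.9 * 0.8,
}
h_eq = equivocation(joint)
# Fano's inequality: H(Pe) + Pe*log2(|D|-1) >= H(D|T); for a binary
# diagnosis this reduces to H(Pe) >= H(D|T), a lower bound on the
# achievable average probability of error.
```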
Computer Assisted Scanning Microscopy In Cytology
E. Bengtsson, B. Dahlqvist, O. Eriksson, et al.
The design of an appropriate scanning and digitization equipment for an image analysis system involves many important trade-offs between image quality, scanning speed and system complexity. In the field of cytometry and automated cytology several different kinds of scanners have been proposed and constructed by various research groups. In this paper the basic physical and technical factors that limit the achievable scanner performance are examined and some examples of recent scanner designs are discussed.
Objective Image Evaluation Of Scintillation Camera Fields
P. T. Cahill, R. J. R. Knowles
Objective formulations of scintillation camera specifications are necessary for purchasing, acceptance testing, quality assurance, and intercomparison of imaging results. Objective formulations have been successfully implemented with respect to spatial resolution but not with respect to field uniformity. This lag has resulted from a failure to recognize the importance of higher order statistical contributions in addition to the first order terms incorporated in customary quantitative indices of integral and differential uniformity. In addition to re-analyzing first order variations, we have introduced SGLDM parameters to specify global texture and analyzed six local texture parameters, of which the local gradient was the most predictive of nonuniformity. Moreover, we have examined the effects of matrix renormalization on both first order terms and texture (global and local). Lastly, simulations of camera fields incorporating configurations of phototubes reveal that systematic and statistical errors can be combined to yield the observed response of existing scintillation cameras.
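A minimal sketch of the global-texture tool the abstract refers to - a spatial grey-level dependence (co-occurrence) matrix - is shown below; the displacement, quantisation depth, and normalisation are illustrative choices, not the paper's parameters.

```python
import numpy as np

def sgldm(img, dx=1, dy=0, levels=8):
    """Spatial grey-level dependence matrix for an image with values
    in [0, 1): m[a, b] is the (normalised) frequency with which grey
    level a occurs displaced by (dx, dy) from grey level b."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantise
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

# Toy flood-field stand-in: uniform response plus noise.
rng = np.random.default_rng(1)
field = np.clip(0.5 + 0.05 * rng.standard_normal((32, 32)), 0.0, 0.999)
m = sgldm(field)
```

Texture indices (contrast, correlation, entropy) are then scalar functionals of `m`; a near-uniform field concentrates mass near the diagonal.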
Splines Interpolation For Image Reconstruction
Zhongquan Wu
In this paper, an interpolation method based on B-spline functions is used in image reconstruction. First, an elementary review of B-spline interpolation theory is given. Next, the influence of the boundary conditions assumed here on the interpolation of filtered projections, and also on the image reconstruction, is discussed. It is shown that this boundary condition has almost no influence on the image in the central region of the image space, because the error of the interpolation decreases rapidly - indeed, by a factor of ten in shifting two pixels from the edge toward the center. The implementation results show that the computational cost of the interpolation using this algorithm is about one-tenth that of the conventional algorithm at the same subjective and objective fidelity.
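A small sketch of interpolating a filtered projection under the "natural" boundary condition (zero second derivative at the ends) - one concrete choice of the boundary conditions whose influence the paper studies. Cubic splines on a given knot set span the same space as the cubic B-spline basis, so this stands in for a B-spline interpolant; the sample projection is a synthetic stand-in, not the paper's data.

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Return a callable natural cubic spline interpolant of (x, y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    h = np.diff(x)
    # Tridiagonal system for the knot second derivatives m; the
    # "natural" condition fixes m[0] = m[-1] = 0.
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    for i in range(1, n - 1):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    m = np.linalg.solve(A, rhs)

    def evaluate(xq):
        xq = np.atleast_1d(xq)
        i = np.clip(np.searchsorted(x, xq) - 1, 0, n - 2)
        t = xq - x[i]
        return (y[i]
                + t * ((y[i + 1] - y[i]) / h[i] - h[i] * (2 * m[i] + m[i + 1]) / 6)
                + t**2 * m[i] / 2
                + t**3 * (m[i + 1] - m[i]) / (6 * h[i]))
    return evaluate

# Interpolate a coarsely sampled filtered projection onto a finer grid.
x = np.linspace(0.0, 1.0, 9)
proj = np.sinc(4 * (x - 0.5))        # stand-in for a filtered projection
spline = natural_cubic_spline(x, proj)
fine = spline(np.linspace(0.0, 1.0, 65))
```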
A New Algorithm For 3D Ventricular Reconstruction From Orthogonal Projections
S. Alliney, F. Sgallari
3D left ventricle reconstruction without geometrical model assumptions is performed using two orthogonal projections. The bending is assumed to be locally similar at any ventricular surface point, in both horizontal and vertical planes. In this way, for any horizontal slice we have: (a) the overall dimensions; (b) a good estimate of the curvature radii. The 3D reconstruction problem is reduced to a constrained optimization problem for each horizontal section. A numerical solution is obtained by an iterative procedure based on a descent algorithm.
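The per-slice problem is posed as a constrained optimization solved by an iterative descent procedure. A generic sketch of such a scheme is projected gradient descent, where the problem-specific `grad` and `project` callables are assumptions standing in for the paper's objective and slice constraints:

```python
import numpy as np

def projected_descent(grad, project, x0, step=0.1, n_iter=200):
    """Alternate a gradient step on the objective with projection
    back onto the feasible set defined by the constraints."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = project(x - step * grad(x))
    return x

# Toy usage: minimise ||x - t||^2 subject to x >= 0.
t = np.array([1.0, -2.0, 0.5])
x = projected_descent(lambda x: 2 * (x - t),          # gradient
                      lambda x: np.maximum(x, 0.0),   # projection
                      x0=np.zeros(3))
```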
Image Reconstruction By Maximum Entropy
M. C. Kemp, P. H. Jarritt
The maximum entropy method (ME) in the form proposed by Gull and Daniell is a powerful and general method for reconstructing an image from incomplete and noisy data; it takes noise into account and gives smooth, positive reconstructions with a resolution automatically determined by the quality of the data. The method is described, and examples from tomography and image deconvolution are used to illustrate its properties.
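A rough structural sketch of a Gull-and-Daniell-style iteration: maximise the entropy S = -Σ f ln f subject to a chi-squared misfit with the data. Stationarity of S - (λ/2)χ² gives f = exp(-1 - λ Aᵀ(Af - d)/σ²); below, the multiplier λ is held fixed rather than solved for, and the fixed-point update is damped by geometric averaging, so this illustrates the form of the solution (positivity, data-driven smoothness) and is not the authors' algorithm.

```python
import numpy as np

def mem_sketch(data, A, sigma=1.0, lam=1.0, n_iter=300):
    """Damped fixed-point iteration on the maximum-entropy
    stationarity condition; f stays strictly positive throughout."""
    f = np.full(A.shape[1], max(data.mean(), 1e-6))   # flat positive start
    for _ in range(n_iter):
        g = np.exp(-1.0 - lam * (A.T @ (A @ f - data)) / sigma**2)
        f = np.sqrt(f * g)            # geometric-mean damping
    return f

# Toy usage: deconvolve a 1-D blur with a 3-tap kernel (all illustrative).
n = 8
A = sum(w * np.eye(n, k=k) for w, k in [(0.25, -1), (0.5, 0), (0.25, 1)])
f_true = np.zeros(n); f_true[3] = 1.0
d = A @ f_true
f = mem_sketch(d, A)
```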
Limited Angle CT Reconstruction Using A Priori Information
Kenneth M. Hanson
Projection data that are limited in number and range of viewing angle cannot completely specify an arbitrary source function. In the space of all permissible functions there exists a null subspace about which the projection measurements provide no information. Deterministic reconstruction algorithms usually set the null space contributions to zero leading to severe reconstruction artifacts. A Fit And Iterative Reconstruction (FAIR) method is proposed that incorporates a priori knowledge of the approximate functional form of the source. In FAIR the parameters of this functional model are determined from the available projection data by a weighted fitting procedure. The resulting distribution is then iteratively revised to bring the final estimate into agreement with the measured projections using a standard algorithm such as ART.
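The revision step that FAIR hands to a standard algorithm such as ART can be sketched as Kaczmarz sweeps over the measured rays; starting from the fitted model `x0` rather than from zero is what preserves the null-space content that the limited-angle projections cannot determine. The system matrix and relaxation factor below are illustrative, not the paper's.

```python
import numpy as np

def art(A, b, x0, n_sweeps=50, relax=1.0):
    """ART (Kaczmarz) sweeps: project the estimate onto each
    measurement hyperplane a_i . x = b_i in turn."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            denom = a @ a
            if denom > 0:
                x += relax * (b[i] - a @ x) / denom * a
    return x

# Toy usage: an underdetermined (limited-data) consistent system.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 6))          # 4 rays, 6 unknowns
b = A @ rng.normal(size=6)
x0 = np.zeros(6)                     # here: no prior model
x = art(A, b, x0)
```

With a model-based `x0`, ART changes the estimate only within the row space of `A`, leaving the a-priori null-space component intact.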
Image Reconstruction From Incomplete Projections
Hidemitsu Ogawa
We propose a series expansion method for image reconstruction from projections which is equivalent to the Fourier transform method. The reconstruction formula is applicable to incomplete projection data such as limited-angle projection data and restricted-region scan data. It applies not only to pencil-beam and fan-beam scanners but also to any other type of CT scanner, for instance a fan-beam scanner with a linear motion of the x-ray source. The reconstruction technique therefore provides a direct method to restore any sectional image of a three-dimensional body.
Computer Imaging As A Creative Tool For Medical Researchers
T. Kaminuma, I. Suzuki
A general purpose interactive pattern analysis and image display system has been developed for medical applications. The system consists of a PDP 11/70 under the RSX 11/M operating system, drum scanner and TV camera input subsystems, an interactive color image display, and some other special peripherals. It can process nearly all types of patterns that we encounter in medical research except dynamic image sequences such as cineangiograms. The system has been developed in response to versatile requirements from clinical collaborators. After the usefulness of the system for image analysis was established, the system was extended to geo-health data display in an epidemiological study and also to molecular imagery in pharmacological applications. Based on this experience, a revised version of the system, which will overcome the shortcomings of the present system, is under development.
Tuning Of An Interactive Software System For Image Analysis To Quantitative Microscopy.
O. Eriksson, E. Bengtsson, T. M. Jarkrans, et al.
In this paper a general purpose, modular and expandable software image analysis system, ILIAD, is described. The system uses a high level language as its command language. It is expandable through the use of procedures, which may call each other. A hierarchical tree structure of procedures may be created to tune the system for particular applications. A description of how the ILIAD system can be tuned for collecting images from an automated microscope is given. Some of the lower level procedures are described and the multilevel structure is outlined.
RIFRAN : Interactive Fringe Pattern Analysis And Processing System
Masanori Idesawa, Toyohiko Yatagai
Fringe patterns such as interference fringes and moiré fringes have been widely used to measure surface shape, deformation, displacement, and so on without contact. However, quantitative analysis of fringe patterns is not always easy. In this paper, the authors develop two types of interactive fringe pattern analysis systems. One of them, called RIFRAN I, applies image processing techniques and builds up a sectional-shape model semi-automatically. The other, called RIFRAN II, provides a tablet digitizer for input of fringe positions and constructs a contour-line model. Furthermore, 3-D model processing systems are developed for further analysis. Key words: fringe pattern, interactive input, 3-D shape model, sectional shape, contour line, quantitative analysis
The Masking Effect Of Fluoroscopic X-Ray Noise As A Function Of Unsharpness And Intensity And The Viewing Distance
P.J. 't Hoen
The masking effect is expressed in terms of the visual threshold contrast of bars and disks. A very sensitive and fast psychometric method is developed for the measurement of the threshold contrast. Substantial variations of the unsharpness have relatively small influence. This may well be predicted by the noise-equivalent aperture concept, provided that a visual-system unsharpness is also taken into account. This unsharpness may correspond to the newly developed visual-system modulation transfer function, which gives good predictions for the influence of the unsharpness of the object on visibility. The threshold appears to be inversely proportional to the square root of the X-ray exposure rate, as predicted by models based on the statistical variation of the noise. As the perceived unsharpness is apparently not very important, the masking effect is correctly predicted to be inversely proportional to the viewing distance.
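The two scaling laws reported at the end of the abstract - threshold contrast inversely proportional to the square root of the exposure rate and to the viewing distance - can be combined into a tiny hedged model; `threshold_contrast` and its parameter names are illustrative, not the paper's notation.

```python
import math

def threshold_contrast(c_ref, rate_ref, rate, d_ref, d):
    """Scale a reference threshold c_ref (measured at exposure rate
    rate_ref and viewing distance d_ref) to new conditions using
    c ~ 1/sqrt(exposure rate) and c ~ 1/(viewing distance)."""
    return c_ref * math.sqrt(rate_ref / rate) * (d_ref / d)

# Quadrupling the exposure rate halves the noise-masked threshold:
c = threshold_contrast(0.04, rate_ref=1.0, rate=4.0, d_ref=1.0, d=1.0)
```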
Optimizing The Digital Radiographic Image By Means Of A Dynamic Phantom And Animal Studies
P. T. Cahill, R. J. R. Knowles, B. C. P. Lee, et al.
The quality of digital radiographic images cannot be judged subjectively by visual inspection, nor by the currently available static phantoms used for conventional radiographic equipment. In this study, a quasistatic phantom was developed to maximize the mapping of contrast differences into available gray levels while obtaining maximum resolution. Flow patterns were evaluated in dynamic phantoms of varying rigidity and in canine arteries by a series of pulsed contrast bolus injections at various flow rates. The effect of stenoses in the phantoms and canine vessels on the flow pattern was also studied. Our results indicate that it is possible to determine the velocity of flow by these methods, and that the inherent elastic properties of the flexible phantom and arteries, rather than the degree of stenosis, are responsible for modification of the flow pattern.
Gray Tone Image Quality Evaluation Criterion Based On Fuzzy Measures
S. D. Bedrosian, Wei-Xin Xie
A practical application of fuzzy set theory in image processing is introduced. The fuzzy measures of gray-tone band images provide theoretical justification for often-quoted experimental data. They are also significant for image interpretation when processing artifacts have been minimized. A new two-dimensional monochrome image quality evaluation criterion based on fuzzy measures is presented.
Computer Generated Holograms In Medicine
Robert J. Perlmutter, Stephen S. Friedland
Various diagnostic and therapeutic procedures should be considerably enhanced by the presentation of the data from a CT, NMR, PET or ultrasonic scan in holographic form. Initially we demonstrated a holographic presentation using a multiplex white light hologram formed from many x-rays of a cadaver's hand. We have explored various methods of computer generating a hologram, and in this paper we report on our method of calculating a Fresnel hologram produced by the superposition of the Fresnel patterns from the individual planes of the CT scan. The limitations of the computer generation of holograms with respect to present and anticipated computer power and recording techniques are presented.
Computer Aided Prosthetic Implant Manufacturing Using CT Image Data
Michael L. Rhodes, William V. Glenn Jr., John F. Quinn, et al.
A system is described that delivers three-dimensional shape data to plan corrective surgery and directly manufacture prosthetic implants. Implant geometry is manufactured to precise dimensions using CT image data and an algorithm that generates instructions for numerically controlled milling machines. This work extends previous work in CT image segmentation algorithms to wed structure contour data to the machinery used for making special implants. The types of implants currently available come in only a few sizes; a combination of changes to both the standard implant and, unfortunately, to the femur itself is required to achieve a firm, stable reconstruction. The system outlined here minimizes the removal of patient skeletal mass by manufacturing implants customized for each patient. The system is interconnected via digital transmission lines and will become entirely automated. Example implants are shown and new application areas are presented.
An Improved Stereotactic System For CT Aided Neurosurgery
Michael L. Rhodes, William V. Glenn, Jr., Yu-Ming Azzawi, et al.
Several computed tomography (CT) aided stereotactic systems have been introduced during the last five years for precise placement of neurosurgical instruments. Using digital CT image data that is transformed to a patient-frame coordinate system, surgery can be simulated, planned and executed with sub-millimeter precision. This paper introduces a second generation stereotactic system that improves on the speed, image resolution, accuracy and patient comfort of past and current systems. The system described here is designed for surgical procedures conducted entirely in the CT suite. Geometric resolution of this system is presented, test procedures are described and phantom results are discussed. An application to percutaneous knee surgery is briefly mentioned. At this writing patient data is not yet available.
A Simple Feasibility Demonstration Of A Local Area Network For A Digital Imaging Department
S. C. Horii, M. P. Zeleznik, G. Q. Maguire Jr., et al.
This paper will discuss a unified digital image distribution and processing system linking various digital image sources through a broadband local area network and a common image format. The system allows for viewing and processing of all digital images produced within the complex, and for viewing stations at any number of convenient locations. The physical handling of storage media at image sources will be almost eliminated when complete archiving, file maintenance and large scale processing capabilities are provided by a central file server. This paper presents a concrete proposal for an initial system which uses a commercial nuclear medicine computer system as the initial viewing station, file server and archiving facility for permanently storing and selectively viewing computed tomography (CT) and nuclear medicine (NM) images. The system proposed can then be slowly expanded to include all the digital images produced by the radiology department, and ultimately to include all the images by digitizing those produced in an analog fashion.