- Soft Computing in Remote Sensing Applications
- Applications I
- Applications of Neural Networks, Fuzzy Systems, and Evolutionary Computations in Electronic CAD
- Soft Computing at Physical Optics Corporation
- Applications II
- Applications III
- Poster Session
Soft Computing in Remote Sensing Applications
Phase unwrapping as an ill-posed problem: performance comparison between a neural-network-based approach and a stochastic search method
2D phase unwrapping, a problem common to signal processing, optics, and interferometric radar topographic applications, consists of retrieving an absolute phase field from principal, noisy measurements. In this paper, we analyze the application of neural networks to this complex mathematical problem, formulating it as a learning-by-examples strategy: a multilayer perceptron is trained to associate a proper correction pattern with the principal phase gradient configuration in a local window. In spite of the high dimensionality of the problem, the proposed MLP, trained on examples from simulated phase surfaces, proves able to correctly remove more than half the original number of pointlike inconsistencies on real noisy interferograms. Better efficiencies could be achieved by enlarging the processing window size, so as to exploit a greater amount of information. Pushing this change of perspective further, one passes from a local to a global point of view; problems of this kind are more effectively solved not through learning strategies but by minimization procedures, for which we propose a powerful algorithm based on a stochastic approach.
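The "pointlike inconsistencies" mentioned above are phase residues: elementary 2x2 loops over which the wrapped phase gradients fail to sum to zero. The abstract does not give the authors' formulation, so the following is only a minimal illustrative sketch of residue detection on a wrapped phase field stored as a list of lists:

```python
import math

def wrap(a):
    """Wrap a phase difference into [-pi, pi)."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def residues(p):
    """Residue charge of each elementary 2x2 loop of wrapped gradients.

    Nonzero charges are the pointlike inconsistencies that obstruct
    path-independent unwrapping of the phase field p."""
    out = []
    for i in range(len(p) - 1):
        for j in range(len(p[0]) - 1):
            # sum wrapped gradients around the closed loop of four pixels
            s = (wrap(p[i][j + 1] - p[i][j])
                 + wrap(p[i + 1][j + 1] - p[i][j + 1])
                 + wrap(p[i + 1][j] - p[i + 1][j + 1])
                 + wrap(p[i][j] - p[i + 1][j]))
            out.append(round(s / (2 * math.pi)))
    return out
```

A smoothly wrapped ramp yields no residues, while a phase vortex yields a single unit charge.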
Image segmentation with scatter-partitioning RBF networks: a feasibility study
Andrea Baraldi
Scatter-partitioning Radial Basis Function (RBF) networks increase their number of degrees of freedom with the complexity of an input-output mapping to be estimated on the basis of a supervised training data set. Among the scatter-partitioning RBF networks found in the literature, a Gaussian RBF model, termed supervised growing neural gas (SGNG), is selected due to its superior expressive power. SGNG employs a one-stage error-driven learning strategy and is capable of generating and removing both hidden units and synaptic connections. A slightly modified SGNG version is tested as a function estimator when the training surface to be fitted is an image, i.e., a 2D signal whose size is finite. The relationship between the generation, by the learning system, of disjointed maps of hidden units and the presence, in the image, of pictorially homogeneous subsets is investigated. Unfortunately, the examined SGNG version performs poorly both as a function estimator and as an image segmenter. This may be due to an intrinsic inadequacy of the one-stage error-driven learning strategy to adjust structural parameters and output weights simultaneously but consistently. As a possible remedy, in the framework of RBF networks the combination of a two-stage error-driven learning strategy with synapse generation and removal criteria should be further investigated.
Soft classifications for the mapping of land cover from remotely sensed data
Image classifications used in mapping land cover from remotely sensed data are frequently described as being 'hard' or 'soft', yet in reality such a simple distinction is not observed and a continuum of classification softness can be defined. Using airborne sensor imagery of two test sites in South Wales, classifications at different points along this continuum with a feedforward neural network are illustrated. It is shown that soft classification can provide a better and more accurate representation of both discrete and continuous land cover classes, resolving in particular problems associated with mixed pixels. Classifications produced at different positions along the continuum of classification softness, however, differed markedly in the representation of land cover distribution and accuracy, highlighting the need to recognize the existence of the continuum and its implications for land cover mapping from remotely sensed data. The results also highlight that the use of a soft or fuzzy classifier is only a partial solution to the mixed pixel problem; a full solution requires refinement of the training and testing stages, and methods for this are discussed. Despite an ability to accommodate the effects of mixed pixels in each of the three stages of supervised image classification, other factors can degrade classification quality. One important issue is the presence of untrained classes. It is shown, however, that the effect of untrained classes can be reduced with the use of additional information on the typicality of class membership that can be derived from some soft classifications.
Parallel genetic algorithm for the design of neural networks: an application to the classification of remotely sensed data
We consider the problem of classification of remotely sensed data from LANDSAT Thematic Mapper images. The data were acquired in July 1986 over an area located in South Italy. We compare the performance obtained by feed-forward neural networks whose topology is designed by a parallel genetic algorithm with that obtained by a multi-layer perceptron trained with the Back Propagation learning rule. The parallel genetic algorithm, implemented on the APE100/Quadrics platform, is based on the coding scheme recently proposed by Sternieri and Anelli and exploits a recently proposed environment for genetic algorithms on Quadrics, called AGAPE. The SIMD architecture of Quadrics forces the chromosome representation. The coding scheme provides that the connection weights of the neural network are organized as a floating point string. The parallelization scheme adopted is the elitist coarse-grained stepping stone model, with migration occurring only towards neighboring processors. The fitness function depends on the mean square error. After fixing the total number of individuals and running the algorithm on Quadrics architectures with different numbers of processors, the proposed parallel genetic algorithm displayed a superlinear speedup. We report results obtained on a data set of 1400 patterns.
Detailed comparison of neuro-fuzzy estimation of subpixel land-cover composition from remotely sensed data
Mixed pixels, which do not follow a known statistical distribution that could be parameterized, are a major source of inconvenience in the classification of remote sensing images. This paper reports on an experimental study designed for the in-depth investigation of how and why two neuro-fuzzy classification schemes, whose properties are complementary, estimate sub-pixel land cover composition from remotely sensed data. The first classifier is based on the fuzzy multilayer perceptron (FMLP) proposed by Pal and Mitra; the second classifier consists of a two-stage hybrid (TSH) learning scheme whose unsupervised first stage is based on the fully self-organizing simplified adaptive resonance theory clustering network proposed by Baraldi. Results of the two neuro-fuzzy classifiers are assessed by means of specific evaluation tools designed to extend conventional descriptive and analytical statistical estimators to the case of multi-membership in classes. When a synthetic data set consisting of pure and mixed pixels is processed by the two neuro-fuzzy classifiers, experimental results show that: i) the two neuro-fuzzy classifiers perform better than the traditional MLP; ii) classification accuracies of the two neuro-fuzzy classifiers are comparable; and iii) the TSH classifier requires less background knowledge for training than the FMLP.
Applications I
Evaluating alternative forms of crossover in evolutionary computation on linear systems of equations
Experiments are conducted to assess the utility of alternative crossover operators within a framework of evolutionary computation. Systems of linear equations are used for testing the efficiency of one-point, two-point, and uniform crossover. The results indicate that uniform crossover, which disrupts building blocks maximally, generates statistically significantly better solutions than one- or two-point crossover. Moreover, for small population sizes, crossing over existing solutions with completely random solutions can perform as well as or better than the traditional one- and two-point operators.
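The abstract's exact chromosome encoding for linear systems is not given; the sketch below merely contrasts the operators under comparison on generic list-encoded chromosomes (population machinery and fitness evaluation omitted):

```python
import random

def one_point_crossover(p1, p2, rng):
    """Split both parents at one random cut point and swap tails."""
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def uniform_crossover(p1, p2, rng):
    """Choose each gene independently from either parent -- the
    maximally building-block-disruptive operator the paper favors."""
    c1, c2 = [], []
    for a, b in zip(p1, p2):
        if rng.random() < 0.5:
            c1.append(a)
            c2.append(b)
        else:
            c1.append(b)
            c2.append(a)
    return c1, c2
```

Both operators conserve genes positionally: at every locus the two children together carry exactly the two parental alleles.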
Soft computing applications: the advent of hybrid systems
Soft computing is a new field of computer science that deals with the integration of problem-solving technologies such as fuzzy logic, probabilistic reasoning, neural networks, and genetic algorithms. Each of these technologies provides us with complementary reasoning and searching methods to solve complex, real-world problems. We will analyze some of the most synergistic combinations of soft computing technologies, with an emphasis on the development of smart algorithm-controllers, such as the use of FL to control GA and NN parameters. We will also discuss the application of GAs to evolve NNs or tune FL controllers, and the implementation of FL controllers as NNs tuned by backpropagation-type algorithms. We will conclude with a detailed description of a GA-tuned fuzzy controller to implement a train handling control.
Fuzzy blood pressure measurement
Antonino Cuce,
Mario Di Guardo,
Gaetano Sicurella
In this paper, an intelligent system for blood pressure measurement is proposed, together with a possible implementation using an eight-bit fuzzy processor. The system can automatically determine the ideal cuff inflation level, eliminating the discomfort and misreadings caused by incorrect cuff inflation. Using the statistical distribution of systolic and diastolic blood pressure, a fuzzy rule system determines, during the inflation phase, the pressure levels at which to check for the presence of a heartbeat, in order to exceed the systolic pressure with the minimum gap. The heartbeats, characterized through pressure variations, are recognized by a fuzzy classifier.
Applications of Neural Networks, Fuzzy Systems, and Evolutionary Computations in Electronic CAD
Evolutionary algorithms, simulated annealing, and Tabu search: a comparative study
Evolutionary algorithms, simulated annealing (SA), and Tabu Search (TS) are general iterative algorithms for combinatorial optimization. The term evolutionary algorithm is used to refer to any probabilistic algorithm whose design is inspired by evolutionary mechanisms found in biological species. The most widely known algorithms of this category are Genetic Algorithms (GA). GA, SA, and TS have been found to be very effective and robust in solving numerous problems from a wide range of application domains. Furthermore, they are even suitable for ill-posed problems where some of the parameters are not known beforehand. These properties are lacking in all traditional optimization techniques. In this paper we perform a comparative study of GA, SA, and TS. These algorithms have many similarities, but they also possess distinctive features, mainly in their strategies for searching the solution state space. The three heuristics are applied to the same optimization problem and compared with respect to (1) the quality of the best solution identified by each heuristic, (2) the progress of the search from the initial solution(s) until the stopping criteria are met, (3) the progress of the cost of the best solution as a function of time, and (4) the number of solutions found at successive intervals of the cost function. The benchmark problem is the floorplanning of very large scale integrated circuits. This is a hard multi-criteria optimization problem. Fuzzy logic is used to combine all objective criteria into a single fuzzy evaluation function, which is then used to rate competing solutions.
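The paper's actual membership functions and aggregation operator are not specified in the abstract; the sketch below shows one conventional way to fold multiple floorplanning criteria (e.g. area, wire length, delay, all to be minimized, with illustrative acceptable/unacceptable bounds) into a single fuzzy rating, using linear memberships and the min operator as the fuzzy AND:

```python
def fuzzy_membership(value, lo, hi):
    """Map a raw to-minimize criterion value into [0, 1]:
    1 at or below the fully-acceptable bound lo, 0 at or above hi."""
    if value <= lo:
        return 1.0
    if value >= hi:
        return 0.0
    return (hi - value) / (hi - lo)

def fuzzy_evaluate(criteria, bounds):
    """Combine per-criterion memberships with min (the fuzzy AND),
    so a solution is only as good as its worst criterion."""
    return min(fuzzy_membership(v, lo, hi)
               for v, (lo, hi) in zip(criteria, bounds))
```

The min aggregation makes the rating conservative; softer operators (e.g. weighted averages) trade that off differently.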
Application of evolutionary computation in ECAD problems
Dae-Hyun Lee,
Seung Ho Hwang
Design of modern electronic systems is a complicated task which demands the use of computer-aided design (CAD) tools. Since many problems in ECAD are combinatorial optimization problems, evolutionary computations such as genetic algorithms and evolutionary programming have been widely employed to solve them. We have applied evolutionary computation techniques to ECAD problems such as technology mapping, microcode-bit optimization, data path ordering, and peak power estimation, where their benefits are well observed. This paper presents experiences and discusses issues in those applications.
Perturbation method for probabilistic search for the traveling salesperson problem
James P. Cohoon,
John E. Karro,
Worthy N. Martin,
et al.
The Traveling Salesperson Problem (TSP) is an NP-complete combinatorial optimization problem of substantial importance in many scheduling applications. Here we show the viability of SPAN, a hybrid approach to solving the TSP that incorporates a perturbation method applied to a classic heuristic in the overall context of a probabilistic search control strategy. In particular, the heuristic for the TSP is based on the minimal spanning tree of the city locations, the perturbation method is a simple modification of the city locations, and the control strategy is a genetic algorithm (GA). The crucial concept here is that perturbing the problem allows variant solutions to be generated by the heuristic and applied to the original problem, thus providing the GA with capabilities for both exploration and exploitation in its search process. We demonstrate that SPAN outperforms, with regard to solution quality, one of the best GA systems reported in the literature.
Experiences in the use of evolutionary techniques for testing digital circuits
Fulvio Corno,
Maurizio Rebaudengo,
Matteo Sonza Reorda
The generation of test patterns for sequential circuits is one of the most challenging problems arising in the field of Computer-Aided Design for VLSI circuits. In the past decade, Genetic Algorithms have been deeply investigated as a possible approach: several algorithms have been described, and significant improvements have been proposed with respect to their original versions. As a result, GA-based test pattern generators can now effectively compete with other methods, such as topological or symbolic ones. This paper discusses the advantages and disadvantages of GA-based approaches and describes GATTO, a state-of-the-art GA-based test pattern generator. Other algorithms belonging to the same category are outlined as well. The paper puts GATTO and other GA-based tools in perspective, and shows that evolutionary computation techniques can successfully compete with more traditional approaches, or be integrated with them.
Built-in self-repair of VLSI memories employing neural nets
Pinaki Mazumder
The decades of the Eighties and the Nineties have witnessed the spectacular growth of VLSI technology, with chip size increasing from a few hundred devices to a staggering multi-million transistors. This trend is expected to continue as the CMOS feature size progresses towards nanometric dimensions of 100 nm and less. The SIA roadmap projects that, whereas DRAM chips will integrate over 20 billion devices in the next millennium, future microprocessors may incorporate over 100 million transistors on a single chip. As VLSI chip size increases, the limited accessibility of circuit components poses great difficulty for external diagnosis and replacement in the presence of faulty components. For this reason, extensive work has been done on built-in self-test techniques, but little research is known concerning built-in self-repair. Moreover, the extra hardware introduced by conventional fault-tolerance techniques is itself likely to become faulty, rendering the circuit useless. This research demonstrates the feasibility of implementing electronic neural networks as intelligent hardware for memory array repair. Most importantly, we show that the neural network control possesses a robust and degradable computing capability under various fault conditions. Overall, a yield analysis performed on 64K DRAMs shows that the yield can be improved from as low as 20 percent to near 99 percent by the self-repair design, with an overhead of no more than 7 percent.
Uncertainty in fuzzy and partial logic in the context of electrical CAD
Areski Nait Abdallah,
Eugene B. Shragowitz
Uncertainty is a fundamental feature of the design process. Until the appearance of fuzzy logic, only probabilistic models were available for dealing with uncertainty. In this paper the issue of uncertainty is considered based on fuzzy logic models and logic of partial information models, a new development in logic. The paper contains an overview of inference principles in fuzzy logic and partial logic, as well as a classification of CAD systems based on the application of fuzzy logic. Examples of problem formulations in fuzzy logic and partial information logic are provided for the same problems, and the results are compared.
Soft Computing at Physical Optics Corporation
Ongoing applications of soft computing technologies to real-world problems at Physical Optics Corporation
Soft computing is a set of promising computational tools for solving problems that are inherently well solved by humans but not by standard computing means. This paper presents an overview of R&D activities at Physical Optics Corporation in the area of soft computing. The company has been involved in soft computing for over ten years, and has pioneered several soft-computing methodologies, including fuzzified genetic algorithms and neuro-fuzzy networks. Several practical implementations of soft computing are discussed.
Intelligent security system based on neuro-fuzzy multisensor data fusion
This paper presents a real-world application of neuro-fuzzy processing to a security system with multiple sensors. Integrating fuzzy logic with neural networks, the authors have automated the tasks of sensor data fusion and determination of false/true alarms, which currently rely solely on human monitoring operators, so that they operate in a way similar to human reasoning. This integrated security system includes a set of heterogeneous sensors. To take advantage of each sensor's strengths, the sensors are positioned and integrated for wide, accurate, economical coverage. The system includes real-time tracking cameras functioning as true digital motion detectors with the capability of approximating the size, direction, and number of intruders. The system is also capable of real-time image segmentation based on motion, and of image recognition based on neural networks.
Application of genetic algorithms to autopiloting in aerial combat simulation
An autopilot algorithm that controls a fighter aircraft in simulated aerial combat is presented. A fitness function, whose arguments are the control settings of the simulated fighter, is continuously maximized by a fuzzied genetic algorithm. Results are presented for one-to-one combat simulated on a personal computer. Generalization to many-to-many combat is discussed.
Soft computing channel optimization for compressed and raw video data stream
Compressed and raw video data streams are discussed in the context of soft computing channel optimization for satellite and wireless communication. Digital video data are important, since they define high-bandwidth digital data communication requirements. For channel number maximization, it would be important to reduce NTSC/VGA video data throughput to audio-like channel bandwidth levels. The soft computing channel optimization combines such diversified features as data compression, type and degree of data correlation, type of data modulation, type of error-correcting codes, and systemic/physical sources of errors.
Video coding algorithm based on singularities reconstruction
This paper provides an analysis of Physical Optics Corporation's video coding algorithm, which is based on mapping singularities reconstruction and its utilization in MPEG video flow. This algorithm can provide the object manipulation required by the MPEG-4 standard.
Singular manifold extraction as a novel approach to image decomposition
A novel approach to image decomposition is proposed, based on the theory of catastrophes. The singular manifold extraction provides finite source coding and compression. Various object-abstraction levels are applied to object manipulation and high-level body animation.
Novel hyperspectral video concept based on recent advances in soft computing and acousto-optic technology
In this paper, we propose an integration of two techniques: catastrophe-based image compression/coding, and hyperspectral video, based on acousto-optic technology. As a result, we obtain a joint tool for sensing and transmission of hyperspectral data, with minimized bandwidth and latency.
Color sensor and neural processor on one chip
A low-cost, compact, and robust color sensor that can operate in real time under various environmental conditions can benefit many applications, including quality control, chemical sensing, food production, medical diagnostics, energy conservation, monitoring of hazardous waste, and recycling. Unfortunately, existing color sensors are either bulky and expensive or do not provide the required speed and accuracy. In this publication we describe the design of an accurate real-time color classification sensor, together with preprocessing and a subsequent neural network processor, integrated on a single complementary metal oxide semiconductor (CMOS) integrated circuit. This one-chip sensor and information processor will be low in cost, robust, and mass-producible using standard commercial CMOS processes. The performance of the chip and the feasibility of its manufacture are demonstrated through computer simulations based on CMOS hardware parameters. Comparisons with competing methodologies show a significantly higher performance for our device.
Applications II
Fuzzy clustering and soft switching of linear regression models for reversible image compression
This paper describes an original application of fuzzy logic to reversible compression of 2D and 3D data. The compression method consists of a space-variant prediction followed by context-based classification and arithmetic coding of the outcome residuals. Prediction of a pixel to be encoded is obtained from the fuzzy switching of a set of linear regression predictors. The coefficients of each predictor are calculated so as to minimize the prediction MSE for those pixels whose gray-level patterns, lying on a causal neighborhood of prefixed shape, are vectors belonging in a fuzzy sense to one cluster. In the 3D case, pixels both on the current slice and on previously encoded slices may be used. The size and shape of the causal neighborhood, as well as the number of predictors to be switched, may be chosen before running the algorithm and determine the trade-off between coding performance and computational cost. The method exhibits impressive performance, for both 2D and 3D data, mainly thanks to the optimality of the predictors, due to their skill in fitting data patterns.
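The exact clustering and switching rule are not given in the abstract; the sketch below illustrates the general idea with fuzzy-c-means-style memberships, where the fuzzifier m, the cluster centers, and the per-cluster regression coefficients are illustrative assumptions standing in for the trained quantities:

```python
import math

def memberships(pattern, centers, m=2.0):
    """Fuzzy memberships of a causal-neighborhood pattern to each
    cluster center (fuzzy-c-means form, fuzzifier m > 1)."""
    d = [max(math.dist(pattern, c), 1e-12) for c in centers]
    return [1.0 / sum((di / dj) ** (2.0 / (m - 1.0)) for dj in d)
            for di in d]

def fuzzy_switched_prediction(pattern, centers, coeffs):
    """Membership-weighted blend ('soft switch') of the per-cluster
    linear regression predictors applied to the same pattern."""
    u = memberships(pattern, centers)
    preds = [sum(w * x for w, x in zip(cw, pattern)) for cw in coeffs]
    return sum(ui * pi for ui, pi in zip(u, preds))
```

A pattern lying on a cluster center gets membership near 1 for that cluster, so the blend degenerates to that cluster's predictor.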
Real-time integrated process supervision for autotuners and modified cerebellar model articulation controller
Abdul Wahab,
H. C. Quek,
B. H. Lim
This paper presents the use of a micro-controller-based Integrated Process Supervision (IPS) as a tool for investigative work in expert control. Two different control theories integrated within the process serve as examples of a structured approach to expert control. The Integrated Process Supervision is a refinement of the Expert Control Architecture proposed by Karl J. Astrom, allowing the integration of several control techniques in a single generic framework. Specifically, the paper presents the results of experiments performed on an implementation of the Integrated Process Supervision in a PC and micro-controller environment. Autotuning techniques were first integrated within the process supervision. Three autotuners based on the specification of phase and amplitude margins were investigated. A modified version of the Cerebellar Model Articulation Controller was then implemented in the IPS as a direct controller. The results collected verify its integration in the Integrated Process Supervision and also provide evidence of improved performance compared to autotuning.
Comparative evaluation of pattern recognition algorithms: statistical, neural, fuzzy, and neuro-fuzzy techniques
Pattern recognition by fuzzy, neural, and neuro-fuzzy approaches has gained popularity, partly because of the intelligent decision processes involved in some of these techniques, thus providing better classification, and partly because of the simplicity of computation required by these methods as opposed to traditional statistical approaches for complex data structures. However, the accuracy of pattern classification by the various methods is often not considered. This paper considers the performance of major fuzzy, neural, and neuro-fuzzy pattern recognition algorithms and compares their performance with common statistical methods on the same data sets. For the specific data sets chosen, namely the Iris data set and the small Soybean data set, two neuro-fuzzy algorithms, AFLC and IAFC, outperform other well-known fuzzy, neural, and neuro-fuzzy algorithms in minimizing the classification error and equal the performance of Bayesian classification. AFLC and IAFC also demonstrate excellent learning vector quantization capability in generating optimal code books for coding and decoding of large color images at very low bit rates with exceptionally high visual fidelity.
Case for sensorless robots
The past three decades have witnessed significant research activities and accomplishments in the robotics field. These efforts have resulted in a whole array of useful systems, most notably for our factories and military. In this paper we discuss a new approach for developing useful robotic systems for much wider applications. The main idea is to simplify the robotic system by eliminating the sensory and control hardware from the robotic platform. Such 'sensor-less' systems have significant advantages over the more traditional 'sensor-driven' systems. The paper discusses this issue and presents results for roving robots that require no onboard sensing or control.
Applications III
Concept of linear run and its application to image compression
Sukhamay Kundu
We introduce here the notion of a linear-run in the sequence of gray values of an image. We present an optimal linear-time algorithm for decomposing an image into the smallest number of linear-runs that approximate the gray values within a given error limit (epsilon) > 0. For (epsilon) <= 6, which gives good visual quality approximations, the linear-runs give significantly fewer runs than the traditional runs based on constant gray values in each run, and require only about 1/3 of the memory needed for the traditional run-length method. We demonstrate the usefulness of this method on example images.
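The paper's optimal linear-time algorithm is not reproduced in the abstract; as a rough illustration of the idea only, the greedy (non-optimal) sketch below splits a 1-D gray-value sequence into runs whose values stay within eps of the straight line through each run's endpoints:

```python
def linear_runs(values, eps):
    """Greedily decompose a gray-value sequence into (start, end)
    index runs; within each run, every value lies within eps of the
    line through the run's endpoint values."""
    runs, start = [], 0
    for end in range(1, len(values)):
        a, b = values[start], values[end]
        n = end - start
        # check every interior point against the endpoint-to-endpoint line
        ok = all(abs(values[start + k] - (a + (b - a) * k / n)) <= eps
                 for k in range(1, n))
        if not ok:
            runs.append((start, end - 1))
            start = end - 1  # adjacent runs share an endpoint
    runs.append((start, len(values) - 1))
    return runs
```

A perfect ramp collapses to a single run; a gray-level jump forces a run boundary, which is where linear-runs beat constant-value run-length coding.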
Extension of the generalized Hebbian algorithm for principal component extraction
Principal component analysis (PCA) plays an important role in various areas. In many applications it is necessary to adaptively compute the principal components of the input data. Over the past several years, there have been numerous neural network approaches to adaptively extract principal components for PCA. One of the most popular learning rules for training a single-layer linear network for principal component extraction is Sanger's generalized Hebbian algorithm (GHA). We have extended the GHA (EGHA) by including a positive-definite symmetric weighting matrix in the representation error cost function that is used to derive the learning rule for training the network. The EGHA presents the opportunity to place different weighting factors on the principal component representation errors. Specifically, if prior knowledge is available pertaining to the variances of each term of the input vector, this statistical information can be incorporated into the weighting matrix. We have shown that, by using a weighted representation error cost function where the weighting matrix is diagonal with the reciprocals of the standard deviations of the input on the diagonal, more accurate results can be obtained with the EGHA than with the GHA.
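The weighting-matrix extension is specific to the paper; the sketch below shows only the baseline Sanger rule it builds on (learning rate and iteration schedule are illustrative choices, not the authors'):

```python
import numpy as np

def gha_update(W, x, lr):
    """One step of Sanger's generalized Hebbian algorithm.

    Rows of W (k x d) converge to the top-k principal directions of
    the input stream; the lower-triangular term imposes the
    Gram-Schmidt-like ordering that distinguishes GHA from plain
    Hebbian learning."""
    y = W @ x  # component outputs
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```

In the EGHA, per the abstract, a diagonal weighting matrix of reciprocal input standard deviations enters the error cost function from which a rule of this form is derived.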
Different approach to designing neural networks for similar handwritten Chinese character recognition
The input images of Chinese characters are normally preprocessed using different image processing techniques before the main classification in handwritten Chinese character recognition. The authors propose a different approach to the system philosophy of solving the handwritten Chinese character recognition problem in which no preprocessing is necessary. The Chinese characters are treated as ideographs. The proposed system consists of a Rough Classifier which triggers different Fine Classifiers. Each classifier is an artificial neural network optimized using genetic algorithms. A reduced system has been implemented. The results show that the proposed system has a higher recognition rate than similar systems reported and is more efficient.
Fuzzy correlation analysis with realization
Yue Y. Tang,
Xinrui Fan,
Ying N. Zheng
The fundamental concept of fuzzy correlation is briefly discussed. Based on the correlation coefficients of classic correlation, polarity correlation, and fuzzy correlation, the relationships between the correlations are analyzed. Fuzzy correlation analysis has the merits of both rapidity and accuracy, as some amplitude information of the random signals is utilized. It has broad prospects for application. The form of a fuzzy correlative analyzer built from the NLX 112 fuzzy data correlator and a single-chip microcomputer is introduced.
On-line measurement and accuracy analysis for parts using neural networks
In this paper, a new on-line measurement and accuracy analysis method for part configuration and surface is presented, combining computer vision and neural networks. Unlike conventional contact measurement, it is a non-contact measurement method, and it can operate on-line. In this method, the 3D configuration and surface of a part are reconstructed from a stereo image pair taken by a computer vision system. The architecture for parallel implementation of the part measurement system is developed using neural networks. Several relevant approaches, including system calibration, stereo matching, and 3D reconstruction, are constructed using neural networks. Instead of the conventional system calibration method, which needs a complicated iterative calculation process, a new system calibration approach is presented using a BP neural network. The 3D coordinates of the part surface are obtained from 2D points on the images by several BP neural networks. Based on the above architecture and approaches, a part measurement and accuracy analysis system for intelligent manufacturing is developed by making full use of the advantages of neural networks. The experiments and application research for this system are also presented in this paper. It is proved through actual application that the method presented in this paper can meet the needs of on-line measurement of parts in intelligent manufacturing. It has important value especially for on-line measurement of parts that have complicated surfaces.
Keywords: neural networks, on-line measurement, computer vision, 3D reconstruction
Poster Session
Using evolutionary optimization for specialized recursive filter synthesis
Julia V. Sergienko,
Yuri S. Yurchenko
Show abstract
Designing digital filters that meet multiple quality criteria is a necessity for modern radar and navigation systems. The complexity of the problem increases because no analog prototype can be used in such a situation. Consequently, direct synthesis methods are of great interest, evolutionary optimization among them. The use of evolutionary optimization techniques to synthesize both recursive and non-recursive filters for the signal processing units of radar and navigation systems is discussed, and the choice of proper variables for the optimization routine is considered. To synthesize a filter from multiple conflicting criteria, a modified Niched Pareto genetic algorithm has been used; the algorithm produces a Pareto tradeoff surface of non-dominated solutions. A recursive filter for a navigation system receiver with digital quadrature frequency selection has been synthesized from three criteria: maximum output signal-to-noise ratio, maximum suppression of the adjacent channel, and minimum sensitivity of the wavefront sampling moment to carrier frequency deviation. The results obtained are presented.
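At the core of any Pareto-based genetic algorithm, including the modified Niched Pareto GA the abstract mentions, is the dominance test used to select the tradeoff surface. A minimal sketch, with criterion vectors assumed to be minimized:

```python
def dominates(a, b):
    # a dominates b if it is no worse on every criterion and strictly
    # better on at least one (all criteria taken as minimized here).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    # Keep the non-dominated solutions: the Pareto tradeoff surface the
    # algorithm reports, from which a designer picks a filter balancing
    # SNR, adjacent-channel suppression, and sampling-moment sensitivity.
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

In the full GA this test drives tournament selection, while a niching (sharing) term spreads the population along the front rather than letting it collapse to one compromise.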
Fuzzy system for detecting microcalcifications in mammograms
Show abstract
We present a fuzzy classifier for detecting microcalcifications in digitized mammograms. The classifier post-processes the output from a wavelet-based multiscale correlation filter. Each local peak in the correlation filter output is represented by a set of five features describing the shape, size, and definition of the peak. These features are used in linguistic rules by a fuzzy system that is trained to distinguish between microcalcifications and normal mammogram texture. In borderline cases, where microcalcifications are buried in dense tissue or appear only faintly, simply drawing a straight threshold across the feature vector values will likely not produce the correct classification. The fuzzy system allows the effective 'threshold' to be drawn across ranges of feature values depending upon how they interact with one another. Compared to wavelet processing alone, the fuzzy detection system produces a significant increase in true positive fraction when tested on a public domain mammogram database.
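The abstract does not list the membership functions or rules; the sketch below illustrates the general mechanism with assumed trapezoidal memberships and min/max rule aggregation. The feature names and rule parameters are hypothetical, not the paper's trained values.

```python
def trap(x, a, b, c, d):
    # Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear ramps between.
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def microcalc_score(features, rules):
    # Each rule is a list of (feature_name, trapezoid_params) clauses.
    # Clauses within a rule are ANDed (min); rules are ORed (max), so a
    # weak value of one feature can be offset by strong values of others,
    # unlike a straight per-feature threshold.
    return max(min(trap(features[name], *params) for name, params in rule)
               for rule in rules)
```

A crisp classifier would reject any peak whose size falls below a cutoff; here a borderline size still yields a nonzero membership that the other features can reinforce.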
Applications I
Automatic alignment of a synchrotron radiation source beamline using intelligent systems
S. Olof Svensson,
Roberto Pugliese
Show abstract
Synchrotron Radiation (SR) sources in general, and the new third-generation SR sources in particular, deliver very intense x-ray beams with very low divergence. However, due to small shifts of the stored electron beam position caused by re-optimization of the closed orbit after shutdowns, the beamlines must be regularly re-aligned in order to deliver optimum performance. Since the beamlines generally contain complicated optical elements, such as x-ray mirrors and monochromators, the alignment procedure is difficult and time-consuming. Automatic beamline alignment has been envisaged in order to maintain optimal beamline performance more consistently. An Intelligent System approach has been chosen to face the complexity of x-ray beamline alignment, and a knowledge-based system has been chosen for the development of the automatic alignment tools. The developed tools have been applied to the multi-wavelength anomalous dispersion (MAD) beamline of the European Synchrotron Radiation Facility (ESRF). The intensity and the spot shape at the sample position, measured with a small 2D CCD detector, were optimized by automatically aligning the main optical element, a bent cylindrical mirror that focuses the beam in both the horizontal and vertical directions. The developed automatic techniques have been shown to robustly optimize the intensity and the focal spot shape on the ESRF MAD beamline. A series of images of the beam shape showing the optimization will be presented.
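The abstract does not describe the knowledge-based system's optimization loop. As a stand-in, the sketch below shows a simple derivative-free coordinate search of the kind such an alignment loop could use: nudge each motor in turn, keep moves that improve the CCD figure of merit, and shrink the step when nothing helps. The motor parametrization and step schedule are assumptions.

```python
def align(motors, measure, step=0.1, shrink=0.5, tol=1e-3):
    # `motors` is a list of actuator positions (e.g. mirror pitch, bend);
    # `measure` returns the figure of merit (e.g. integrated intensity on
    # the 2D CCD detector) for a given set of positions.
    best = measure(motors)
    while step > tol:
        improved = False
        for i in range(len(motors)):
            for delta in (+step, -step):
                trial = list(motors)
                trial[i] += delta
                val = measure(trial)
                if val > best:      # keep only moves that raise the merit
                    motors, best = trial, val
                    improved = True
        if not improved:
            step *= shrink          # refine the search once stuck
    return motors, best
```

A knowledge-based system would wrap such a loop with rules deciding which element to align first and how to react to anomalies; the inner search itself needs no model of the optics.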
Poster Session
Method and evaluation test design for 2D/3D image segmentation
Fabrizio Giorgini,
Silvana G. Dellepiane,
Gianluca Incardona,
et al.
Show abstract
A method for semi-interactive 3D segmentation is described. It is an extension of semi-interactive 2D segmentation based on the concept of fuzzy connectedness, and it makes use of local and topological information simultaneously. In addition, the design of a protocol for evaluating 2D/3D segmentation results on 3D synthetic images is proposed.
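The abstract does not give the affinity function or propagation scheme. A common formulation of fuzzy connectedness, sketched here in 2D for brevity, scores each pixel by the strongest path from a user-chosen seed, where a path's strength is its weakest link; the intensity-similarity affinity below is an illustrative assumption, and the 3D extension just adds two more neighbors.

```python
import heapq

def fuzzy_connectedness(image, seed):
    # Dijkstra-like propagation: pop the pixel with the strongest known
    # connectedness, then try to improve its 4-neighbors via the min of
    # the path strength so far and the local affinity.
    h, w = len(image), len(image[0])

    def affinity(a, b):
        # Simple affinity: high for similar intensities, low across edges.
        return 1.0 / (1.0 + abs(image[a[0]][a[1]] - image[b[0]][b[1]]))

    conn = {seed: 1.0}
    heap = [(-1.0, seed)]                      # max-heap via negated strengths
    while heap:
        strength, (r, c) = heapq.heappop(heap)
        strength = -strength
        if strength < conn.get((r, c), 0.0):   # stale heap entry
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                s = min(strength, affinity((r, c), (nr, nc)))
                if s > conn.get((nr, nc), 0.0):
                    conn[(nr, nc)] = s
                    heapq.heappush(heap, (-s, (nr, nc)))
    return conn
```

Thresholding the resulting connectedness map yields the segmented object; the threshold, or the seed itself, is the semi-interactive part.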