
Proceedings Paper

Designing adaptive neural network architectures and their learning parameters using genetic algorithms
Author(s): Hiroki Takahashi; Takeshi Agui; Hiroshi Nagahashi

Paper Abstract

This report describes a GA (Genetic Algorithm) method that evolves multi-layered feedforward neural network architectures for specific mappings. A network is represented as a genotype with six kinds of genes: the learning rate, the slope of the sigmoid function, the coefficient of the momentum term, the range for initializing weights, the number of layers, and the number of units in each layer. Genetic operators act on populations of these genotypes to produce adaptive networks with higher fitness values. We define three kinds of fitness functions that evaluate the networks generated by the GA method; each fitness is assessed from several performance measures of the generated network after training with the BP (Back Propagation) algorithm. In our experiments, we train the networks on the XOR mapping. The networks are designed systematically and easily using the GA method, require fewer training cycles than conventionally designed networks, and show an improved rate of convergence.
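The evolutionary loop described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it fixes a single hidden layer (the paper also evolves the number of layers), uses simple truncation selection and uniform crossover, and the gene ranges, mutation scheme, and fitness definition (inverse of the total squared error on the four XOR patterns) are all assumptions for the sketch.

```python
import math, random

random.seed(0)

# The four XOR patterns used as the training mapping.
XOR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sigmoid(x, slope):
    return 1.0 / (1.0 + math.exp(-slope * x))

def train_and_score(geno, epochs=500):
    """Train a 2-hidden-1 network with BP plus momentum; return a fitness."""
    lr, slope, mom = geno["lr"], geno["slope"], geno["momentum"]
    init, hidden = geno["init"], geno["hidden"]
    # Weights drawn from the genotype's initialization range; +1 input is bias.
    w1 = [[random.uniform(-init, init) for _ in range(3)] for _ in range(hidden)]
    w2 = [random.uniform(-init, init) for _ in range(hidden + 1)]
    dw1 = [[0.0] * 3 for _ in range(hidden)]
    dw2 = [0.0] * (hidden + 1)
    for _ in range(epochs):
        for x, t in XOR:
            xi = x + [1.0]
            h = [sigmoid(sum(w * v for w, v in zip(row, xi)), slope) for row in w1]
            hb = h + [1.0]
            y = sigmoid(sum(w * v for w, v in zip(w2, hb)), slope)
            # Deltas; the sigmoid derivative carries the slope gene as a factor.
            dy = (t - y) * slope * y * (1 - y)
            dh = [dy * w2[j] * slope * h[j] * (1 - h[j]) for j in range(hidden)]
            for j in range(hidden + 1):            # hidden -> output updates
                dw2[j] = lr * dy * hb[j] + mom * dw2[j]
                w2[j] += dw2[j]
            for j in range(hidden):                 # input -> hidden updates
                for k in range(3):
                    dw1[j][k] = lr * dh[j] * xi[k] + mom * dw1[j][k]
                    w1[j][k] += dw1[j][k]
    # Fitness: inverse of total squared error over the XOR patterns (assumed).
    err = 0.0
    for x, t in XOR:
        xi = x + [1.0]
        h = [sigmoid(sum(w * v for w, v in zip(row, xi)), slope) for row in w1] + [1.0]
        err += (t - sigmoid(sum(w * v for w, v in zip(w2, h)), slope)) ** 2
    return 1.0 / (1.0 + err)

def random_genotype():
    # Gene ranges are illustrative assumptions, not the paper's values.
    return {"lr": random.uniform(0.1, 2.0), "slope": random.uniform(0.5, 2.0),
            "momentum": random.uniform(0.0, 0.9), "init": random.uniform(0.1, 1.0),
            "hidden": random.randint(2, 6)}

def crossover(a, b):
    # Uniform crossover: each gene taken from either parent.
    return {k: random.choice((a[k], b[k])) for k in a}

def mutate(g, rate=0.2):
    g = dict(g)
    if random.random() < rate:
        g["lr"] = min(2.0, max(0.05, g["lr"] * random.uniform(0.7, 1.3)))
    if random.random() < rate:
        g["hidden"] = max(2, min(8, g["hidden"] + random.choice((-1, 1))))
    return g

def evolve(pop_size=8, generations=5):
    pop = [random_genotype() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=train_and_score, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=train_and_score)

best = evolve()
```

Because each fitness evaluation trains a network from fresh random weights, the score is stochastic; the GA still drifts toward genotypes whose learning rate, momentum, and hidden-unit count let BP solve XOR in fewer cycles.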

Paper Details

Date Published: 19 August 1993
PDF: 8 pages
Proc. SPIE 1966, Science of Artificial Neural Networks II, (19 August 1993); doi: 10.1117/12.152652
Author Affiliations:
Hiroki Takahashi, Tokyo Institute of Technology (Japan)
Takeshi Agui, Tokyo Institute of Technology (Japan)
Hiroshi Nagahashi, Tokyo Institute of Technology (Japan)

Published in SPIE Proceedings Vol. 1966:
Science of Artificial Neural Networks II
Dennis W. Ruck, Editor(s)

© SPIE. Terms of Use