
Proceedings Paper

Non-isotonous beta-driven artificial neuron
Author(s): Victor I. Varshavsky; Vyacheslav B. Marakhovsky

Paper Abstract

In this paper we discuss variants of a digital-analog CMOS implementation of an artificial neuron that can be taught logical threshold functions. The implementation is based on the previously proposed β-comparator and three output amplifiers. Such a circuit can be taught only threshold functions with positive variable weights, which belong to the class of isotonous Boolean functions. However, most problems solved by artificial neural networks also require inhibitory inputs. If the input type is known beforehand, the problem of inverting the weight sign is solved trivially by inverting the respective variable. Otherwise, the neuron should have synapses capable of forming both the weight and the type of the input during learning, using only increment and decrement signals. A neuron with such synapses can learn an arbitrary threshold function of a given number of variables. Synapse circuits with one or two memory elements for storing positive and negative input weights are suggested. The results of SPICE simulation show that the problem of teaching a neuron non-isotonous threshold functions has stable solutions.
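The learning scheme the abstract describes — synapses that form both the magnitude and the sign of a weight using only increment and decrement signals — can be illustrated in software by a perceptron-style rule restricted to unit weight steps. This is only an analogy to the idea, not the authors' CMOS circuit; the function `teach`, its parameters, and the example target function are all illustrative assumptions:

```python
# Illustrative software analogy (NOT the paper's CMOS implementation):
# a threshold element whose integer weights are adjusted only by unit
# increment/decrement signals, so a weight may drift negative and its
# input becomes inhibitory -- yielding a non-isotonous threshold function.
from itertools import product

def teach(target, n, epochs=200):
    """Learn integer weights w and threshold theta such that
    f(x) = [sum(w_i * x_i) >= theta] equals target(x) for all x in {0,1}^n.
    Only +1/-1 updates are used, mimicking increment/decrement signals."""
    w = [0] * n
    theta = 0
    for _ in range(epochs):
        converged = True
        for x in product([0, 1], repeat=n):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
            t = target(x)
            if y != t:
                converged = False
                delta = 1 if t == 1 else -1          # increment or decrement signal
                w = [wi + delta * xi for wi, xi in zip(w, x)]
                theta -= delta                        # adjust threshold the opposite way
        if converged:
            return w, theta                           # stable solution found
    return None                                      # target not learned (not a threshold fn?)

# Example: f(x1, x2) = x1 AND NOT x2, a non-isotonous threshold function;
# learning must drive the weight of x2 negative (inhibitory input).
result = teach(lambda x: int(x[0] == 1 and x[1] == 0), 2)
```

For linearly separable (threshold) targets this unit-step rule converges by the perceptron convergence theorem; the point of the analogy is that no sign of the weight is fixed in advance — the same increment/decrement mechanism produces excitatory or inhibitory synapses as the target function demands.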

Paper Details

Date Published: 30 March 2000
PDF: 8 pages
Proc. SPIE 4055, Applications and Science of Computational Intelligence III, (30 March 2000); doi: 10.1117/12.380577
Author Affiliations:
Victor I. Varshavsky, Neural Networks Technologies Ltd. (Japan)
Vyacheslav B. Marakhovsky, Univ. of Aizu (Japan)

Published in SPIE Proceedings Vol. 4055:
Applications and Science of Computational Intelligence III
Kevin L. Priddy; Paul E. Keller; David B. Fogel, Editor(s)

© SPIE.