Neural Computation

March 1993, Vol. 5, No. 2, Pages 165-199
(doi: 10.1162/neco.1993.5.2.165)
© 1993 Massachusetts Institute of Technology
Neural Networks and Nonlinear Adaptive Filtering: Unifying Concepts and New Algorithms
This paper proposes a general framework that encompasses the training of neural networks and the adaptation of filters. We show that neural networks can be considered as general nonlinear filters that can be trained adaptively, that is, that can undergo continual training on a possibly infinite number of time-ordered examples. We introduce the canonical form of a neural network. This canonical form permits a unified presentation of network architectures and of gradient-based training algorithms for both feedforward networks (transversal filters) and feedback networks (recursive filters). We show that several algorithms classically used in linear adaptive filtering, as well as some algorithms proposed by other authors for training neural networks, are special cases within a general classification of training algorithms for feedback networks.
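As a rough illustration of the idea of a neural network acting as an adaptive nonlinear transversal filter, the sketch below trains a one-hidden-layer network online, sample by sample, on a sliding window of a signal. This is a minimal hypothetical example, not the paper's canonical form or its specific algorithms: the network size, step size, and the `target` system are invented for demonstration, and the update is plain per-sample stochastic gradient descent (backpropagation).

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer feedforward network used as a nonlinear transversal
# filter: the input is a sliding window of the M most recent samples,
# and the weights are adapted online on time-ordered examples.
M, H = 5, 8          # window length (filter order), hidden units
lr = 0.05            # adaptation step size
W1 = rng.normal(0, 0.3, (H, M)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.3, H);      b2 = 0.0

def predict(x):
    h = np.tanh(W1 @ x + b1)   # hidden activations
    return W2 @ h + b2, h       # scalar filter output

# Hypothetical unknown system to identify: a mildly nonlinear
# function of the input window.
def target(x):
    return np.tanh(0.8 * x[0] - 0.5 * x[1]) + 0.1 * x[2]

signal = rng.normal(size=2000)
errs = []
for t in range(M, len(signal)):
    x = signal[t - M:t][::-1]   # window, most recent sample first
    y, h = predict(x)
    e = target(x) - y           # instantaneous error
    # Per-sample gradient step on 0.5*e**2 (backpropagation).
    W2 += lr * e * h; b2 += lr * e
    delta = e * W2 * (1 - h**2)
    W1 += lr * np.outer(delta, x); b1 += lr * delta
    errs.append(e**2)

# The squared error should shrink as the filter adapts.
print(np.mean(errs[:100]), np.mean(errs[-100:]))
```

A recursive (feedback) filter would additionally feed delayed network outputs back into the input window, which is where the paper's unified treatment of feedback architectures and their gradient-based training algorithms applies.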