
Neural Computation

July 1, 2002, Vol. 14, No. 7, Pages 1507-1544
(doi: 10.1162/08997660260028593)
© 2002 Massachusetts Institute of Technology
A Monte Carlo EM Approach for Partially Observable Diffusion Processes: Theory and Applications to Neural Networks
We present a Monte Carlo approach for training partially observable diffusion processes. We apply the approach to diffusion networks, a stochastic version of continuous recurrent neural networks. The approach is aimed at learning probability distributions of continuous paths, not just expected values. Interestingly, the relevant activation statistics used by the learning rule presented here are inner products in the Hilbert space of square integrable functions. These inner products can be computed using Hebbian operations and do not require backpropagation of error signals. Moreover, standard kernel methods could potentially be applied to compute such inner products. We propose that the main reason that recurrent neural networks have not worked well in engineering applications (e.g., speech recognition) is that they implicitly rely on a very simplistic likelihood model. The diffusion network approach proposed here is much richer and may open new avenues for applications of recurrent neural networks. We present some analysis and simulations to support this view. Very encouraging results were obtained on a visual speech recognition task in which neural networks outperformed hidden Markov models.
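The inner products mentioned above can be made concrete with a small sketch: simulate a diffusion network (a stochastic recurrent network driven by an SDE) with Euler-Maruyama, then approximate the L² path inner products ⟨x_i, x_j⟩ = ∫ x_i(t) x_j(t) dt by a Riemann sum. This is an illustrative assumption, not the paper's implementation; the drift form, parameter names (`W`, `sigma`, `dt`), and network size are all hypothetical.

```python
# Hypothetical sketch of a diffusion network and its Hebbian path statistics.
# Dynamics assumed: dx = (-x + tanh(W @ x)) dt + sigma dB  (illustrative only).
import numpy as np

def simulate_path(W, x0, T=1.0, dt=1e-3, sigma=0.1, seed=0):
    """Euler-Maruyama discretization of the diffusion network SDE."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    steps = int(T / dt)
    x = np.array(x0, dtype=float)
    path = np.empty((steps + 1, n))
    path[0] = x
    for k in range(steps):
        drift = -x + np.tanh(W @ x)
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        path[k + 1] = x
    return path

def path_inner_products(path, dt=1e-3):
    """Riemann-sum approximation of the L2 inner products <x_i, x_j>.

    These are the Hebbian activation statistics: products of unit
    activations accumulated over the path, with no error backpropagation.
    """
    return dt * path.T @ path  # n x n Gram matrix of sample paths

# Two-unit example with antisymmetric recurrent weights.
W = np.array([[0.0, 0.5], [-0.5, 0.0]])
path = simulate_path(W, x0=[1.0, 0.0])
G = path_inner_products(path)
```

The resulting Gram matrix `G` is exactly the kind of quantity a kernel method could supply directly, since only inner products of paths, never the paths' gradients, enter the learning rule.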