Neural Computation

August 2007, Vol. 19, No. 8, Pages 2245-2279
(doi: 10.1162/neco.2007.19.8.2245)
© 2007 Massachusetts Institute of Technology
Reinforcement Learning, Spike-Time-Dependent Plasticity, and the BCM Rule

Learning agents, whether natural or artificial, must update their internal parameters to improve their behavior over time. In reinforcement learning, this plasticity is guided by an environmental signal, termed a reward, that directs the changes in appropriate directions. We apply a recently introduced policy learning algorithm from machine learning to networks of spiking neurons and derive a spike-time-dependent plasticity rule that ensures convergence to a local optimum of the expected average reward. The approach is applicable to a broad class of neuronal models, including the Hodgkin-Huxley model. We demonstrate the effectiveness of the derived rule on several toy problems. Finally, through statistical analysis, we show that the derived synaptic plasticity rule is closely related to the widely used BCM rule, for which good biological evidence exists.
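The policy-gradient idea the abstract invokes can be sketched with a minimal REINFORCE-style update on a two-armed bandit. This is only a toy stand-in for intuition: the paper's derived rule operates on spiking neurons, and the arm payoffs, learning rate, and baseline scheme below are all hypothetical choices, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.zeros(2)                    # policy parameters, one per arm
true_reward = np.array([0.2, 0.8])     # hypothetical Bernoulli payoff rates
eta = 0.1                              # learning rate (illustrative value)
baseline = 0.0                         # running reward baseline (variance reduction)

for t in range(2000):
    p = softmax(theta)                 # stochastic policy over the two arms
    a = rng.choice(2, p=p)             # sample an action
    r = float(rng.random() < true_reward[a])   # Bernoulli reward
    grad = -p                          # d log pi(a|theta) / d theta ...
    grad[a] += 1.0                     # ... for a softmax policy
    theta += eta * (r - baseline) * grad       # REINFORCE update
    baseline += 0.01 * (r - baseline)  # track average reward

p = softmax(theta)                     # final policy favors the better arm
```

The key structural point carried over to the spiking setting is that the update is the product of a reward term and a local eligibility term (here, the score function of the policy); the paper shows that for spiking neurons this product takes the form of a spike-time-dependent plasticity rule.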