Neural Computation

January 1994, Vol. 6, No. 1, Pages 161-180
(doi: 10.1162/neco.1994.6.1.161)
© 1994 Massachusetts Institute of Technology
Polyhedral Combinatorics and Neural Networks
The often disappointing performance of optimizing neural networks can be partly attributed to the rather ad hoc manner in which problems are mapped onto them for solution. In this paper a rigorous mapping is described for quadratic 0-1 programming problems with linear equality and inequality constraints, this being the most general class of problem such networks can solve. The problem's constraints define a polyhedron P containing all the valid solution points, and the mapping guarantees strict confinement of the network's state vector to P. However, forcing convergence to a 0-1 point within P is shown to be generally intractable, rendering the Hopfield and similar models inapplicable to the vast majority of problems. A modification of the tabu learning technique is presented as a more coherent approach to general problem solving with neural networks. When tested on a collection of knapsack problems, the modified dynamics produced some very encouraging results.
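To make the problem class concrete, here is a hypothetical minimal sketch (not the paper's mapping) of a tiny knapsack instance posed as a 0-1 program with one linear inequality constraint, solved by brute force. The values, weights, and capacity are invented for illustration; the inequality w·x ≤ W together with 0 ≤ x ≤ 1 defines the polyhedron P to which a network's state vector would be confined.

```python
from itertools import product

# Illustrative data (assumed, not from the paper)
values = [6, 10, 12]
weights = [1, 2, 3]
capacity = 5

best_x, best_val = None, float("-inf")
for x in product((0, 1), repeat=len(values)):
    # Feasibility check: does this 0-1 point lie in the polyhedron P?
    if sum(w * xi for w, xi in zip(weights, x)) <= capacity:
        val = sum(v * xi for v, xi in zip(values, x))
        if val > best_val:
            best_x, best_val = x, val

print(best_x, best_val)  # best 0-1 point inside P: (0, 1, 1) 22
```

Enumeration is exponential in n, which is exactly why the abstract notes that forcing convergence of a network to a 0-1 point within P is generally intractable.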