It is now well known that neural classifiers can learn to compute the a posteriori probabilities of classes in input space. This note offers a shorter proof than the traditional ones. Only one class needs to be considered, and straightforward minimization of the error function yields the main result. The method extends to any differentiable error function. We also present a simple visual proof of the same theorem, which stresses that the network must be perfectly trained and have sufficient plasticity.
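The one-class minimization argument can be sketched as follows (notation assumed, not taken from the abstract: $y(x)$ is the network output for a single class $C$, and $t \in \{0,1\}$ is the corresponding binary target):

```latex
\[
  E \;=\; \mathbb{E}_{x,t}\bigl[(y(x)-t)^2\bigr]
    \;=\; \mathbb{E}_x\Bigl[\,y(x)^2 \;-\; 2\,y(x)\,\mathbb{E}[t \mid x]
          \;+\; \mathbb{E}[t^2 \mid x]\Bigr].
\]
% Only y(x) is free, so minimize the inner expression pointwise in x:
\[
  \frac{\partial}{\partial y(x)}
    \Bigl(y(x)^2 - 2\,y(x)\,\mathbb{E}[t \mid x]\Bigr) \;=\; 0
  \quad\Longrightarrow\quad
  y^{*}(x) \;=\; \mathbb{E}[t \mid x] \;=\; P(C \mid x),
\]
% since t is binary: E[t | x] = 1 * P(C | x) + 0 * P(not C | x).
```

The assumption of perfect training with enough plasticity corresponds to the network being able to realize this pointwise minimizer $y^{*}(x)$ exactly.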