Neural Computation

December 1, 2000, Vol. 12, No. 12, Pages 2881-2907
(doi: 10.1162/089976600300014764)
© 2000 Massachusetts Institute of Technology
Asymptotic Convergence Rate of the EM Algorithm for Gaussian Mixtures

It is well known that the convergence rate of the expectation-maximization (EM) algorithm can be faster than that of conventional first-order iterative algorithms when the overlap in the given mixture is small. However, this argument has not been proved mathematically. This article studies the problem asymptotically in the setting of gaussian mixtures under the theoretical framework of Xu and Jordan (1996). It is proved that the asymptotic convergence rate of the EM algorithm for gaussian mixtures, locally around the true solution Θ*, is o(e^(0.5−ε)(Θ*)), where ε > 0 is an arbitrarily small number, o(x) denotes a quantity that is a higher-order infinitesimal as x → 0, and e(Θ*) is a measure of the average overlap of the gaussians in the mixture. In other words, the large-sample local convergence rate of the EM algorithm tends to be asymptotically superlinear as e(Θ*) tends to zero.
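The abstract itself contains no code, but a minimal sketch may make the claim concrete. The illustrative Python below (not from the paper) runs EM on a 1-D two-component gaussian mixture with known unit variances and equal mixing weights, then prints the observed per-iteration error contraction. The separation values, sample sizes, and the helper name em_gmm_1d are assumptions made for this illustration only. The expected behaviour, per the result above, is that the contraction ratio is near zero when the components barely overlap (fast, near-superlinear convergence) and approaches one when they overlap heavily (slow, first-order behaviour).

```python
import numpy as np

rng = np.random.default_rng(0)

def em_gmm_1d(x, mu_init, n_iter=200, tol=1e-10):
    """Minimal EM for a two-component 1-D gaussian mixture with known
    unit variances and equal mixing weights. Only the two means are
    estimated; the full history of estimates is returned so the
    per-iteration error contraction can be inspected."""
    mu = np.array(mu_init, dtype=float)
    history = [mu.copy()]
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 0 for each point
        log_p0 = -0.5 * (x - mu[0]) ** 2
        log_p1 = -0.5 * (x - mu[1]) ** 2
        r0 = 1.0 / (1.0 + np.exp(log_p1 - log_p0))
        # M-step: responsibility-weighted means
        new_mu = np.array([np.sum(r0 * x) / np.sum(r0),
                           np.sum((1 - r0) * x) / np.sum(1 - r0)])
        history.append(new_mu.copy())
        if np.max(np.abs(new_mu - mu)) < tol:
            break
        mu = new_mu
    return np.array(history)

# Compare a small-overlap mixture (well-separated means) with a
# large-overlap one; the contraction ratio of successive errors
# approximates the linear convergence rate.
for sep in (6.0, 1.0):  # separation between the two true means
    true_mu = np.array([0.0, sep])
    x = np.concatenate([rng.normal(true_mu[0], 1.0, 5000),
                        rng.normal(true_mu[1], 1.0, 5000)])
    hist = em_gmm_1d(x, mu_init=[0.25 * sep, 0.75 * sep])
    err = np.linalg.norm(hist - hist[-1], axis=1)
    rates = err[1:-1] / np.maximum(err[:-2], 1e-300)
    print(f"separation {sep}: {len(hist) - 1} iterations, "
          f"typical contraction ratio ~ {np.median(rates[:10]):.3f}")
```

This sketch only probes the empirical contraction of the iterates toward their own limit; it is a rough stand-in for the paper's analytic statement about the rate as a function of the overlap measure e(Θ*).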