Quarterly (winter, spring, summer, fall)
224 pp. per issue, 6 3/4 x 9 1/4
ISSN: 0024-3892
E-ISSN: 1530-9150
2014 Impact Factor: 1.71

Linguistic Inquiry

Summer 2017, Vol. 48, No. 3, Pages 349-388
(doi: 10.1162/ling_a_00247)
© 2017 by the Massachusetts Institute of Technology
Efficient Evaluation and Learning in Multilevel Parallel Constraint Grammars
Abstract

In multilevel parallel Optimality Theory grammars, the number of candidates (possible paths from the input level to the output level) increases exponentially with the number of levels of representation. Under the customary strategy of listing all candidates in a tableau, the computation time for evaluation (i.e., choosing the winning candidate) and for learning (i.e., reranking the constraints on the basis of language data) therefore increases exponentially with the number of levels as well. This article proposes instead to collect the candidates in a graph in which the numbers of nodes and connections increase only linearly with the number of levels of representation. As a result, there exist evaluation and learning procedures whose computation time increases only linearly with the number of levels. These efficient procedures help make multilevel parallel constraint grammars more feasible as models of human language processing. We illustrate visualization, evaluation, and learning with a toy grammar for a traditional case that has previously been analyzed in terms of parallel evaluation, namely French liaison.
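The abstract's central idea — replacing an exponential candidate list with a layered graph that supports linear-time evaluation — can be sketched as a Viterbi-style dynamic program over a trellis. The sketch below is an illustration under assumptions, not the article's implementation: candidate representations sit in levels, edges between adjacent levels carry tuples of constraint-violation counts (ordered from highest- to lowest-ranked constraint), and the winning candidate is the path whose summed violation vector is lexicographically smallest, which mirrors strict constraint ranking. The liaison-flavored forms and constraint labels in the example are invented for illustration only.

```python
# A minimal sketch of evaluation in a multilevel candidate graph (trellis).
# Each level holds candidate representations; each edge between adjacent
# levels carries a tuple of violation counts, highest-ranked constraint first.
# One left-to-right pass over the levels suffices, so the work grows only
# linearly with the number of levels, not with the number of full paths.

def evaluate(levels, violations):
    """levels: list of lists of node labels, one inner list per level.
    violations: dict mapping (level_index, from_node, to_node) to a tuple
    of violation counts. Returns (total violation vector, winning path)."""
    n_constraints = len(next(iter(violations.values())))
    zero = (0,) * n_constraints
    # best[node] = (summed violation vector, path) of the best path ending here
    best = {node: (zero, [node]) for node in levels[0]}
    for i in range(len(levels) - 1):
        new_best = {}
        for v in levels[i + 1]:
            candidates = []
            for u in levels[i]:
                if (i, u, v) in violations and u in best:
                    cost, path = best[u]
                    edge = violations[(i, u, v)]
                    total = tuple(a + b for a, b in zip(cost, edge))
                    candidates.append((total, path + [v]))
            if candidates:
                # Tuple comparison is lexicographic, matching strict ranking.
                new_best[v] = min(candidates)
        best = new_best
    return min(best.values())

# Invented toy data in the spirit of the French liaison case: an underlying
# form maps through a syllabified level to a phonetic output, with and
# without the liaison consonant.
levels = [
    ["|petit ami|"],
    ["/pe.ti.ta.mi/", "/pe.ti.a.mi/"],
    ["[petitami]", "[petiami]"],
]
viol = {
    (0, "|petit ami|", "/pe.ti.ta.mi/"): (0, 1),  # low-ranked violation only
    (0, "|petit ami|", "/pe.ti.a.mi/"): (1, 0),   # high-ranked violation
    (1, "/pe.ti.ta.mi/", "[petitami]"): (0, 0),
    (1, "/pe.ti.a.mi/", "[petiami]"): (0, 0),
}
cost, path = evaluate(levels, viol)  # the liaison path wins: (0, 1)
```

With three levels of two candidates each, a tableau would list every full path; the trellis instead keeps one best partial path per node, which is what makes the pass linear in the number of levels.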