Real-time Functional Architecture of Visual Word Recognition

Journal of Cognitive Neuroscience, February 2015, Vol. 27, No. 2, pp. 246-265
doi: 10.1162/jocn_a_00699
© 2014 Massachusetts Institute of Technology. Published under a Creative Commons Attribution 3.0 Unported (CC BY 3.0) license.
Abstract

Despite a century of research into visual word recognition, basic questions remain unresolved about the functional architecture of the process that maps visual inputs, via orthographic analysis, onto lexical form and meaning, and about the units of analysis over which these processes operate. Here we use magnetoencephalography, supported by a masked priming behavioral study, to address these questions using contrasting sets of simple (walk), complex (swimmer), and pseudo-complex (corner) forms. Early analyses of orthographic structure, detectable in bilateral posterior temporal regions within a 150–230 msec time frame, are shown to segment the visual input into linguistic substrings (words and morphemes) that trigger lexical access in left middle temporal locations from 300 msec. These are primarily feedforward processes and are not initially constrained by lexical-level variables. Lexical constraints become significant from 390 msec, in both simple and complex words, with increased processing of pseudowords and pseudo-complex forms. These results, consistent with morpho-orthographic models based on masked priming data, map out the real-time functional architecture of visual word recognition, establishing basic feedforward processing relationships between orthographic form, morphological structure, and lexical meaning.