If information processing theory attempts to articulate the unobservable (Tracey and Morrow, 2012), then, with the development of scanning technology, it is relevant to explore what is observable in the brain when reading occurs. This is particularly relevant if we reject the notion that reading is a natural, inherent process that can be acquired in much the same way that humans develop spoken language fluency.
The first indications that the brain has a specific, specialised cortical visual centre for letter and word reading emerged from Déjerine’s (1887) discovery that his patient (Mr C.), who could converse articulately, recognise people and objects, read digits and even write, had suddenly become incapable of reading despite having been fluent previously. Mr C. had no visual impairment. On Mr C.’s death, a post-mortem revealed a lesion in the left visual cortex, leading Déjerine to conclude that the disabling of this part of the brain had created the word blindness (alexia) and that it was therefore the visual centre for letters and reading.
This was confirmed one hundred years later with the use of positron emission tomography (PET) scanning (Petersen et al., 1988), which revealed that, when reading, the small area of the brain identified by Déjerine (1887) was indeed activated prior to the activation of the brain’s speech and language functions. This area was not activated by speech alone. Magnetic resonance imaging (MRI) of alexia-afflicted patients (Cohen et al., 2003) pinpointed the area of the brain that recognises letters as lying a few centimetres to the front of the occipital lobe on the underside of the left hemisphere, now known as the left occipito-temporal area, or letterbox.
What Déjerine (1887) proposed was a simple, linear, serial processing chain for reading: written words enter the occipital pole in the form of visual images, to which auditory images are then attached. These auditory images are articulated in Broca’s area for language deciphering and then propagated to the motor cortex for recoding and oral communication.
Déjerine’s (1887) rather neat serial process was undermined by the MRI research (Cohen et al., 2003), which revealed a far more integrated process whereby the visual forms of letter strings are identified in the left occipito-temporal letterbox, which then spreads the information across the left hemisphere for the decoding and encoding of word meaning, sound patterns and articulation. These areas are primarily language-processing areas that evolved for oral communication. Thus, learning to read results from the development and refinement of an efficient interconnection between visual areas and language areas, with all connections being bidirectional (Dehaene, 2010).
Functional magnetic resonance imaging (fMRI) triangulated with post-mortem analysis has enabled even more accurate identification and analysis of this letterbox area (Cohen et al., 2000). This has established that the area of the brain activated for the visual identification of letters is identical across individuals and is universal, no matter how the individual was taught to read or the language in which the words being read are encoded. Further fMRI analysis found almost no difference between individuals in brain activity when reading.
This is crucial for the teaching of reading. Reading acquisition appears to be a highly constrained process that systematically channels information to the same areas of the brain (Puce et al., 1996). The suggestion, therefore, that learners fall into a variety of differing cognitive archetypes that require and sustain a number of differing approaches to reading instruction is fallacious. Only one method is supported by the fMRI scanning evidence: phonic decoding, practised to automaticity and allied to vocabulary expansion that triggers the word superiority effect (Reicher, 1969). Whole-language and balanced-literacy methods such as Reading Recovery may achieve the word superiority effect, but they do so inadvertently, fortuitously and inefficiently, and will often fail, especially with the least efficient and most vulnerable readers.
Perhaps even more revealing is what fMRI scanning shows about invariance in word recognition under changes of letter font and case. When words were read in upper and lower case, fMRI revealed almost no difference in brain activity. More revealing still, a similar invariance of response appeared when words were presented in mixed, random upper and lower case, suggesting that in competent readers neurons take no notice of case and certainly are not stimulated by word shape (Polk and Farah, 2002). Furthermore, activity remained similarly stable, with seemingly no confusion, when anagrams were presented consecutively and when words sharing the same root were presented consecutively. The implication is twofold: first, that we do not recognise words by their shape, and second, that letters do not float independently but that strings of letters are unconsciously bound together. Reicher’s (1969) word superiority effect is thus supported by the fMRI scanning outcomes.
Crucially for reading instruction, fMRI scanning has enabled accurate measurement of word recognition speed. That the entire visual word recognition process, from retinal processing to the highest level of invariance, unfolds automatically and unconsciously in less than one fifth of a second (Polk and Farah, 2002) fundamentally undermines the possibility of reading being a psycholinguistic guessing game (Goodman, 1967). Constant contextual and semantic adjustment and revision of word guesses to select the appropriate word accurately is simply not possible within that timeframe if fluency is to be viable.
Follow the reading ape on twitter - @thereadingape