Bottom-Up Theories of the Reading Process
Bottom-up theories hypothesize that learning to read progresses from children learning the parts of language (letters) to understanding whole text (meaning). Much like solving a jigsaw puzzle, bottom-up models of the reading process say that the reading puzzle is solved by beginning with an examination of each piece of the puzzle and then putting pieces together to make a picture. Two bottom-up theories of the reading process remain popular even today: One Second of Reading by Gough (1972) and A Theory of Automatic Information Processing by LaBerge and Samuels (1974).
Gough’s (1972) One Second of Reading model described reading as a sequential or serial mental process. Readers, according to Gough, begin by translating the parts of written language (letters) into speech sounds, then piece the sounds together to form individual words, then piece the words together to arrive at an understanding of the author’s written message. In their reading model, LaBerge and Samuels (1974) describe a concept called automatic information processing or automaticity.
This popular model of the reading process hypothesizes that the human mind functions much like a computer and that visual input (letters and words) is sequentially entered into the mind of the reader. Almost without exception, humans have the ability to perform more than one task at a time (computer specialists sometimes call this “multitasking”). Because each computer (and, by comparison, the human mind) has a limited capacity available for multitasking, attention must be shifted from one job to another.
If one job requires a large portion of the computer’s available attention capacity, then capacity for another job is limited. The term “automaticity” implies that readers, like computers, have a limited ability to shift attention between the processes of decoding (sounding out words) and comprehending (thinking about the meaning of the author’s message in the text). If readers are too bogged down in decoding the text, they will not be able to focus on the job of comprehending the author’s message.
An example of automaticity in action can be seen in the common skill of learning to ride a bike. Novice bike riders focus so intently on balancing, turning the handlebars, and pedaling that they sometimes fail to attend to other important tasks like direction and potential dangers. Similarly, a reader who is a poor decoder focuses so much of his attention on phonics and other sounding-out strategies that he has little brainpower left for comprehending. When this happens, the reading act, like an overloaded computer, “crashes.”
In contrast, children who are accomplished bike riders can ride without hands, carry on a conversation with a friend, dodge a pothole in the road, and chew gum at the same time. Like the accomplished bike rider, fluent readers can rapidly focus on the author’s message because decoding no longer demands the lion’s share of their attention capacity. In summary, the LaBerge and Samuels (1974) model predicts that if reading can occur automatically, without too much focus on the decoding process, then improved comprehension will be the result.
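The limited-capacity idea at the heart of the LaBerge and Samuels (1974) model can be sketched in a few lines of code. This is an illustrative analogy only, not part of the original model: the function name and the numeric values for decoding demand are hypothetical, chosen simply to show that whatever attention decoding consumes is unavailable for comprehension.

```python
# Illustrative sketch of automaticity as a fixed attention budget.
# All names and numbers here are hypothetical stand-ins.

TOTAL_ATTENTION = 1.0  # the mind's fixed processing capacity

def comprehension_capacity(decoding_demand: float) -> float:
    """Whatever decoding does not consume is left over for comprehension."""
    return max(0.0, TOTAL_ATTENTION - decoding_demand)

# A novice decoder spends most of the budget sounding out words...
novice = comprehension_capacity(decoding_demand=0.9)
# ...while a fluent reader decodes automatically, freeing capacity.
fluent = comprehension_capacity(decoding_demand=0.1)

assert fluent > novice  # fluency leaves far more room for meaning
```

On this sketch, a reader whose decoding demand approaches the total budget has essentially no capacity left for comprehension, which is the model's prediction for the "overloaded" novice reader.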
Teachers who believe that bottom-up theories fully explain how children become readers often teach subskills first: they begin instruction by introducing letter names and letter sounds, progress to pronouncing whole words, then show students ways of connecting word meanings to comprehend texts. Although bottom-up theories of the reading process explain the decoding part of the reading process rather well, there is certainly more to reading than decoding.
To become readers, students must compare their knowledge and background experiences to the text in order to understand the author’s message. Truly, the whole purpose of reading is comprehension. Bottom-up theories of reading view reading as an essentially passive process, wherein the reader decodes the intended message of the writer by moving from the lowest level, such as letters and words, towards the higher levels of clauses, sentences, and paragraphs. One bottom-up model, that of Gough (1972), is described by Urquhart and Weir.
In Gough’s model, a number of processing components are used to process text. The model describes the reading process metaphorically, from the perception of the letters that make up the text through to an oral realization of that text. The reader begins with letters, which are detected by the scanner. The strings of letters are then converted into phonemes by the decoder. The output of the decoder then arrives at the librarian, where it is recognized as a word.
The reader then continues by fixating on the next word in the text until every word in the sentence has been analyzed. Merlin is then utilized to apply syntactic and semantic rules in order to determine the meaning of the sentence. The final stage of the model is the Vocal System, where an oral realization of the sentence is produced by the reader. The serial nature of Gough’s model is criticized by Rayner and Pollatsek (1989) as it predicts that words should take longer to recognize than letters, whereas experimental evidence has shown that this is not the case.
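The strictly serial flow that draws this criticism can be sketched as a chain of functions, each consuming only the output of the stage before it. The stage functions below are hypothetical stand-ins for Gough's components, not implementations of them; the point is simply that information flows one way, with no feedback from later stages to earlier ones.

```python
# A minimal sketch of the serial, stage-by-stage flow in Gough's (1972)
# model. Each stage function is a hypothetical stand-in that merely
# passes its result forward to the next stage.

def scanner(text):        # detects the letters on the page
    return list(text)

def decoder(letters):     # maps letter strings onto phonemes (stand-in)
    return "".join(letters).lower()

def librarian(phonemes):  # recognizes the phoneme string as a word
    return {"word": phonemes}

def merlin(words):        # applies syntactic and semantic rules
    return {"meaning": [w["word"] for w in words]}

def vocal_system(parsed): # produces an oral realization of the sentence
    return " ".join(parsed["meaning"])

# Strictly serial: each word passes through every stage before the next
# fixation, and no later stage feeds information back to an earlier one.
sentence = ["THE", "CAT", "SAT"]
words = [librarian(decoder(scanner(w))) for w in sentence]
print(vocal_system(merlin(words)))  # prints "the cat sat"
```

The one-way chain makes the criticized limitation visible: because `merlin` runs only after `librarian`, syntactic or contextual information can never assist word recognition, which is exactly the feedback the experimental evidence suggests readers use.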
In addition, it has been shown that readers utilize ‘high level’ syntactic information in order to recognize words. This would not be predicted by Gough’s original unidirectional, bottom-up approach to reading. The bottom-up or decoding model of reading was also criticized by Eskey (1973) for its failure to account for the contribution of the reader, whose expectations about the text, which are informed by his knowledge of language, are employed as part of the reading process (Carrell, 1988a).
Samuels and Kamil (1988) cite the work of Stanovich (1980), whose criticism of the bottom-up approach is based partly on the lack of feedback loops that would enable processing stages occurring later in the reading process to inform those occurring earlier. As a result, from a bottom-up standpoint “it was difficult to account for sentence-context effects and the role of prior knowledge of text topic as facilitating variables in word recognition and comprehension” (Samuels and Kamil, 1988: 31). The perceived importance of the reader’s expectations in the processing of text led to the development of top-down approaches to reading theory.