Theories of memory
Memory, “the retention of learning and experience”, is now most often studied from an information-processing point of view, and is generally accepted as representing the processes of encoding, storage and retrieval of information (Gross, 2001). Many theories of memory have emerged since the first studies of memory by Ebbinghaus (1885: cited by Gross, 2001), and each theory has its opponents. Atkinson & Shiffrin’s multi-store model remains a highly influential basis for other memory models.
The multi-store model places an emphasis on short-term memory (STM) and long-term memory (LTM), viewing each as a permanent “structural component” of the memory system, and attempts to describe how information flows from one storage system to another. The model shows how raw sensory information is temporarily held in sensory memory whilst it is scanned and matched against information from long-term memory; it is this matching that determines whether information is sufficiently important to draw our attention. Matched information may then progress to STM and, with the operation of certain control processes such as rehearsal, may achieve permanence in LTM.
Many experimental studies of STM and LTM have been consistent with the multi-store model (Atkinson & Shiffrin, 1968, 1971: cited by Gross, 2001). The “serial position effect” (SPE) presented by Murdock was the result of an experimental study in which participants were asked to memorise a series of words. Items at the start and end of the series were recalled more frequently, owing to what Murdock called the primacy and recency effects, while intermediate items were recalled less efficiently – a region subsequently called the asymptote (Murdock, 1962: cited by Cech, 2001 and Gross, 2001). Multi-store theorists would argue that these two sets of results reflect the operation of two different memory systems. Primacy would occur because items at the start of the list have been rehearsed and have consequently been transferred to, and are retrieved from, LTM. Recency would occur because items are still current within STM and are recalled from there.
Asymptote would occur because not enough time is available for rehearsal and storage in LTM, and because STM has a limited capacity of only about seven items (Miller, 1956: cited by Gross, 2001). One of Murdock’s (1962) further experiments used dissociation logic to investigate the relationship of LTM to primacy and asymptote in the SPE. The experiment was carried out with two groups, both presented with the same word list. One group heard the words at a rather fast pace of about one word per second, while the other was allowed enough time for steady rehearsal.
It was found that the faster rate lowered primacy and asymptote but did not affect recency. Murdock thus seemed successful in demonstrating that the primacy and asymptote aspects of the SPE derive from LTM (Cech, 2001; Anderson, 1998). However, since these early studies, many have criticised the multi-store model for its over-simplicity, its linearity, and its focus on certain control processes – specifically its reliance on maintenance rehearsal (how much rehearsal occurs) as opposed to elaborative rehearsal (processing at deeper intellectual levels) (Craik & Watkins, 1973: cited by Gross, 2001).
In line with this criticism, Jenkins (1974: cited by Gross, 2001) found that participants in a study were able to recall unexpected details without rehearsal. Such incidental learning shows that maintenance rehearsal is not necessary for storage. As discussed earlier, the multi-store model represents STM and LTM as separate structural components, while control processes such as rehearsal are less permanent. However, Craik & Lockhart (1972: cited by Gross, 2001) presented the contrasting view that STM and LTM derive from the operation of those control processes.
Craik & Lockhart’s levels-of-processing (LOP) model suggests that “Memory is a by-product of perceptual analysis. This is controlled by the central processor, which can analyse a stimulus on various levels” (Gross, 2001, p. 255). LOP contrasts the depths at which a stimulus may be processed: at a superficial or shallow level, where analysis is physical or sensory (lines, curves, angles, colour, etc.); at a phonetic, phonemic or intermediate level (an acoustic interpretation); or at a semantic or deep level (an associative, meaningful or intellectual interpretation).
The model suggests that the more deeply we process a stimulus, the more likely we are to retain that information. Craik & Tulving (1975: cited by Gross, 2001 and Kivatisky, 2001) also showed that the elaboration and distinctiveness of stimuli aid remembering. However, as with the multi-store model, LOP has its limitations. The model fails to explain why deeper processing leads to improved memory, and serves better as a descriptive representation of memory (Eysenck & Keane, 1995: cited by Gross, 2001).
LOP also fails to offer a universal method of independently measuring the depth of processing, which “places major limits on the power of the levels-of-processing approach” (Baddeley, 1990: cited by Gross, 2001). LOP is just one of many alternatives to the multi-store model. Soon after the multi-store model was proposed, theorists began to make further developments to its basic architecture, highlighting key areas of concern and developing new concepts from subsequent experimental studies.
In light of these findings, Baddeley & Hitch (1974: cited by Gross, 2001) proposed a more complex module to replace the simplistic, and by then much-criticised, STM structure of the multi-store architecture: the working memory (WM) model. Working memory, as the name suggests, represents memory as an active processing system. The model demonstrates how the “central executive” manages control processes such as decision-making, reasoning and the delegation of attention.
More novel is the model’s proposal of separate processing modules, which it calls the “articulatory loop” and the “visuospatial scratchpad”. The articulatory loop is an attempt to explain much of the evidence for acoustic coding in STM, while the visuospatial scratchpad is so named because it is said to handle visual/spatial code (Baddeley et al., 1975: cited by Gross, 2001). WM appears to supersede the models that preceded it, but there is no doubt that models such as the multi-store model laid the foundations for Baddeley et al.’s research. However, as further experimental studies are carried out, even WM is being challenged [see Glenberg’s (1997) opposing theories to WM]. Undoubtedly, there will always be room for conceptual opposition to models of memory – and to their attempts to explain and describe cognitive functions – as theorists can only hypothesise from empirical studies. However, as this discussion has shown, it is also true that with each successive refinement we come increasingly closer to an accurate account of memory.