Comparing Theories of Expertise
The true test of any theory is the extent to which it can account for the phenomena within the field it attempts to explain. In this section, the fit of the four theories (chunking, SEEK, LT-WM, and template) is analyzed using data from experiments on different phenomena seen in expert processing. How well each theory accounts for each phenomenon is broken down according to a review article by F. Gobet (1998), written to elucidate the current standing of the field of expertise. The experiments fall under five primary themes (early perception, STM capacity and LTM encoding, modality of representation, LTM organization, and learning). The table below summarizes the extent to which the different tasks are accounted for by each theory. Rebuttals have been made as to whether Gobet is completely accurate, however, so as with any review article, the following information should be taken with a grain of salt. Opposing viewpoints will be listed following a brief explanation of each theme.
While a general theory of expertise cannot be domain specific (although the need for generalizability across domains is debatable), focusing on one domain (in this case chess) makes the differences in explanatory power clearest. Chess has long been studied and extensively modeled, making it ripe for theory application.
Let’s begin!
Empirical Domains:
Early Perception | STM recall and LTM encoding | Modality of Rep. | LTM Organization | Learning
< Table from Gobet (1998) >
Early Perception:
The four theories can be distinguished by the level at which each acts. Evidence from early perception favors chunking and template theory, which hone in on lower-level, perceptual processes. SEEK (search, evaluation, knowledge) and LT-WM (long-term working memory) have limited perceptual mechanisms. SEEK acts on high-level knowledge, centering expertise on search efficiency and speed rather than on perceiving an object or stimulus.
- Eye Movements: how the eyes fixate or move during a task can reveal the nature of encoding, or simply whether there is a difference between novices and experts. De Groot and Gobet (1996) found that chess masters fixate for shorter amounts of time, with less variance, and cover more of the board, looking specifically at important squares.
- Short Presentations: tasks have been devised in which subjects are shown chess positions (chess pieces arranged on a board) for very brief amounts of time and then asked to recall them. Ellis (1973) found that even for stimuli presented for 150 ms, there is a relationship between skill level and memory. SEEK theory cannot apply, since with presentations of 1 s, for example, differential encoding (based on levels of processing) cannot occur. Furthermore, SEEK puts forth no real explanation of how visual stimuli are processed (as will be discussed in the modality section). LT-WM retrieval structures (if encoding chunks) may work, although they are still not fast enough.
STM capacity and LTM encoding:
The theories can be further teased apart by looking at which memory structures each relies on and how those structures behave. One of the greatest weaknesses of chunking theory is how it approaches STM (short-term memory) and LTM (long-term memory). Simon and Chase proposed that chunking relied on short-term mechanisms (since 5 s presentations were too short for LTM encoding), but that assumption would mean that information held in STM (and thus chess chunks) should not be resistant to manipulations that affect STM (such as interference). Expertise can elucidate the nature and limitations of STM and LTM, and how well each theory accounts for the observed phenomena is another testament to its credibility.
- Interference: pursued mainly by Charness (1976), interference is one major obstacle that must be explained by any theory seeking validity. The task was simple: a chess position (or a sequence of chess positions; Frey & Adesman, 1976) was shown, and the subject (categorized by skill level) had to recall it either immediately (control) or after a 30 s delay (which could be either blank or filled with an intervening task meant to occupy the STM store and thus limit chunking). An increased latency was found for placing the first piece, but more importantly, there was only a 6-8% degradation in information, small compared to traditional consonant trigrams, which show far greater loss following an interpolated task. SEEK and template theory get around the limitation of rapid encoding without massive chunks by assuming one prototype or one template per position, respectively. While every theory besides chunking assumes rapid encoding into LTM (the only way to explain the durability of memory in the face of interference), LT-WM is slighted in that it posits only one retrieval structure, which makes multiple chess positions harder to manage.
- Random Positions: Chase and Simon's (1973) crucial finding that experts and novices do not differ on random positions is controversial, even today. Gobet contends that there is, to some extent, a skill advantage, and predicts that a large database can ensure the right chunks are chosen, even if only by chance. If such a superiority exists, however, SEEK cannot explain it, since it relies on prototypes, and prototypes do not exist for random positions.
- Number of Pieces: Experts in the field have noticed that end-game positions (chess arrangements near the end of a game, when few pieces remain) are the worst to remember, even for experts. SEEK and template theory account for this: since the game tree for chess expands exponentially, representations (templates or prototypes) for end-game positions are unlikely to have formed. LT-WM would have predicted rapid encoding, but that encoding does not occur, posing a challenge to the theory.
- Recall of Games: this task refers to the recall of sequential moves presented through either auditory or visual instruction. One way of using this paradigm is through blindfold chess (where the game is played mentally). Saariluoma (1991) dictated moves from either a real, played game, a randomly generated game in which all moves were legal, or a randomly generated game containing illegal moves. Masters were better at recalling previously played games but worse at random games containing illegal moves. While template theory accounts for all three conditions (superiority on previously played and random-but-legal games is explained through chunks with more cue associations), neither SEEK nor LT-WM can explain the failure of recall in the random-but-illegal condition.
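The exponential blow-up of the game tree invoked in the Number of Pieces bullet can be made concrete with a quick back-of-the-envelope sketch. The branching factor of roughly 35 legal moves per position used below is a commonly cited approximation, not a figure taken from Gobet (1998) or the studies above:

```python
# Rough sketch: why end-game positions (reached only after many moves)
# are unlikely to have dedicated templates or prototypes. Assumes an
# average branching factor of ~35 legal moves per position -- a commonly
# cited approximation, not a figure from the studies discussed above.
BRANCHING_FACTOR = 35

def move_sequences(depth: int, branching: int = BRANCHING_FACTOR) -> int:
    """Number of distinct move sequences of a given depth (in plies)."""
    return branching ** depth

# Each added ply multiplies the space of reachable lines by ~35, so
# positions deep in the tree are spread over an astronomically large space.
for depth in (2, 10, 40):
    print(f"depth {depth:>2}: ~{move_sequences(depth):.1e} sequences")
```

Any particular end-game configuration therefore sits in a vastly larger space of possible lines than an opening does, which is one reading of why templates or prototypes for such positions rarely get the repeated exposure needed to form.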
Modality of Representation
How exactly are stimuli represented in each of the theories? Because template theory builds so heavily on chunking theory, it comes as no surprise that the two share the same type of representation (Gobet, 1998), which is mainly visuospatial, with perceptual input encoded in multiple ways. LT-WM is similar, although it stresses a more spatial form of representation. Ericsson and Kintsch elaborate further, suggesting that information that is initially verbal is recoded into visuospatial form through the storage capacity of the retrieval structure (e.g., mnemonics). SEEK theory differs, coding information primarily in an abstract, verbal representation. The evidence, however, seems to point toward a dominance of visual encoding. Charness (1974) tested performance as a function of representation by having chess players read a description of a chess position, listen to it auditorily, or view it visually. Visually presented positions were recalled best.
LTM organization
Is there data to suggest that the main features of each theory even exist, and how well does that evidence stand up to scrutiny?
- Direct Evidence For Chunks: when having chess experts reconstruct chess positions, Chase and Simon (1973) noticed that certain pieces or groups of pieces were placed on the board, followed by a delay or latency before the next group was placed. They argued that these latencies reflect the "chunking" of pieces, with pauses arising as different chunks are accessed. Partitioning paradigms (in which subjects are asked to divide the board into what they see as meaningful units) suggest that chunks do exist, since certain patterns consistently arose (Freyhoff et al., 1992).
- Number of Chunks in LTM: while most "chunking" researchers would estimate approximately 50,000 chunks (which happens to be roughly the number of words a person needs to know to be considered fluent in a language), there is still debate over how "chunks" should even be conceived. Holding (1992) argues that if chunks are coded generically (without color or location), the number needed in LTM for expertise could be as few as 2,500. To refute this, Gobet and Simon (1996b) tested subjects on distorted chess positions (where the board was mirror-reversed) and found some decrease in recall. Holding would counter, however, that the positions themselves were changed as well, making it hard to reach the prototypes needed for expert processing in chess.
- Direct Evidence for Conceptual Knowledge: while all theories maintain the importance of a knowledge base, SEEK caters specifically to these findings. Lane and Robertson (1979) found, for example, that recall was better for players asked to judge a position and find the next best move (deep processing) than for those merely counting the number of pieces on the board (a structural, superficial task). Furthermore, Gruber (1991) showed that the information players tended to ask for involved high-level descriptors, such as how a certain position arises and what the next two or three moves might be.
- Direct Evidence for Retrieval Structures: the main evidence comes from advocates of LT-WM, Ericsson and Staszewski (1989), who showed chess players two chess positions sequentially and had them build an internal representation of each board, to be probed later by questions. They found that when the same board was probed repeatedly, the expert's answer times decreased relative to probing the boards randomly or in alternation, arguing for a single retrieval structure.
Learning
Does the evidence in the expertise literature for each of the theories speak to possible ways expertise can be acquired or learned? After all, a viable theory must provide more than an explanation of expert processing phenomena; it must also have predictive power about how a novice reaches the automation and understanding of an expert.
- Short-range learning: most laboratory experiments (at least in the domain of chess) have been of short duration (tens of seconds), so rapid learning must involve some way of encoding briefly presented stimuli. While SEEK theory says little about short-range learning, and LT-WM lacks time parameters (making it hard to evaluate), both template theory and chunking theory can account for it via a mechanism suggested by Chase and Simon (1973). They hypothesized a dual encoding in which familiar pieces are processed simultaneously with less familiar pieces, but after a few seconds attention shifts to the less familiar chunks or isolated pieces, which can then be learned.
- Long-range learning: very few longitudinal studies of chess expertise have been performed, making it hard for long-range learning to have any real discriminatory efficacy in differentiating between the theories. However, Charness (1989) did revisit a chess expert whom he had tested previously and discovered that improvements in chess playing came mainly as a function of changed chunks (which had expanded over time). The search field remained roughly the same (contrary to what SEEK would predict).