To summarize, the bottom of Figure 1 shows two differences between the chunkable and non-chunkable conditions. Top: a sample of stimuli based on eight shapes, eight colors, and two sizes. Bottom: a table showing a sample of trials; the first half of the table shows the sequences of the non-chunkable condition, and the second half shows the sequences of the chunkable condition. For each sequence length, we chose to represent only two trials. Dimension values were chosen randomly for each trial.
The preceding example only involves the dimension values square, triangle, white, and black, but again, the dimension values were actually picked randomly among the eight shapes and the eight colors shown in the top panel, along with two sizes chosen randomly.
Even if performance sometimes results from graded associations between items rather than discrete chunks, the chunk vocabulary conveniently expresses the amount of increase in performance with more compressible lists.
Accordingly, four conditions were constructed: a simple span task using chunkable material, a complex span task using chunkable material, a simple span task using non-chunkable material, and a complex span task using non-chunkable material. We predicted that the simple span task would show a beneficial effect on recall only when some of the information could be re-encoded, whereas no such benefit could occur when little or no information could be re-encoded.
Conversely, a complex span task offers no opportunity to recode the regular patterns in the chunkable condition, because attention is directed away during the interleaved processing task. Therefore, we predicted an interaction between task and compressibility, with a higher span expected only for the simple span task in the chunkable condition.
To test the size of the interaction, we planned to run a Bayesian analysis comparing the quantity of material chunked across the four conditions, using in particular a chunking score reflecting the quantity of material chunked in the simple span task and the complex span task.
A strong interaction should be supported by a smaller chunking score for the complex task. The sample size estimate was computed based on the difference in proportion correct observed in our previous study between the most chunkable condition and the least chunkable condition.
We used only two values per dimension within each trial (Figure 1, bottom). For each trial, a random combination of two shapes (among eight different ones), two colors (among eight different ones), and two sizes made a set of eight possible objects. We restricted the size dimension to two different values (large vs. small).
The use of eight shapes, eight colors, and two sizes was sufficient to generate enough possible sets of eight objects to limit proactive interference between trials (a sampled combination of features is given in Figure 1, top). The participant did not know in advance which of the dimensions would be the most relevant to the categorization process. Dimension values were chosen randomly for each of the lists presented, so as to vary the possible combinations of dimension values (shapes, sizes, and colors) across lists, while preserving the same category structure shown in Figure 1.
The probability that a participant would come across two identical sets of features between two lists during the experiment was assumed to be very low. Each participant attempted all four blocks (chunkable simple span task, non-chunkable simple span task, chunkable complex span task, and non-chunkable complex span task), the order of which was counterbalanced across participants. Each block comprised several lists of stimuli, and recall occurred after each list.
The participants were informed that they were required to memorize, in correct order, each list of stimuli. The stimuli in a given sequence were displayed serially in the center of the screen for one second each. The difficulty of each sequence was estimated following the compressibility metric described by Chekaf et al. This metric makes use of disjunctive normal formulas (a disjunction of conjunctions of features) to compute the minimal number of features that can reduce the uncompressed lists of objects, which list verbatim all of the features of the constituent objects within lists.
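To illustrate the idea, here is a brute-force sketch of such a minimal-DNF feature count. This is our own illustrative reconstruction, not the authors' implementation: objects are assumed to be encoded as (size, color, shape) tuples, and a formula's extension is evaluated against the trial's eight-object set.

```python
from itertools import product, combinations

def matches(conj, obj):
    """A conjunction fixes some dimensions; '*' means 'don't care'."""
    return all(c == '*' or c == o for c, o in zip(conj, obj))

def minimal_dnf_length(target, universe):
    """Smallest total number of literals in a disjunctive normal formula
    whose extension within `universe` is exactly `target` (a brute-force
    sketch of a Feldman-style minimal description length)."""
    target = set(target)
    dims = len(universe[0])
    values = [sorted({o[d] for o in universe}) for d in range(dims)]
    # Every non-empty partial assignment of values to dimensions.
    conjs = [c for c in product(*[v + ['*'] for v in values])
             if any(x != '*' for x in c)]
    # Keep only conjunctions that never pick up an object outside the target.
    conjs = [c for c in conjs
             if all(o in target for o in universe if matches(c, o))]
    best = None
    for k in range(1, len(target) + 1):
        for combo in combinations(conjs, k):
            covered = {o for o in universe
                       if any(matches(c, o) for c in combo)}
            if covered == target:
                cost = sum(x != '*' for c in combo for x in c)
                if best is None or cost < best:
                    best = cost
    return best
```

For example, within an eight-object set, the four black objects compress to the single literal "black" (length 1), whereas an isolated object needs all three of its features.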
After the list of items was presented, the response screen showed the whole set of eight objects from which the subset had been selected. The response screen showed, in randomly determined positions, eight response choices: the k to-be-recalled stimuli and the 8 - k remaining distractor objects.
Participants were required to recall the list of items and to reconstruct their order. The participant made selections by clicking on the objects to recall the items in the correct order. The stimuli were underlined with a white bar when the user clicked on them. There was no timing constraint for recall. The participant could move on to the next sequence by pressing the space bar. The 8 - k remaining distractor objects in the test screen allowed us to compute the compressibility properly.
For instance, for Trial 14nc shown in Figure 1, the recall screen included a large green triangle, a small purple triangle, a small green circle, and a large purple circle as the new items, in addition to the four stimuli (a large purple triangle, a small green triangle, a small purple circle, and a large green circle).
Trial 14c shown in Figure 1 included the four red objects in addition to the four blue stimuli. The compressibility of the memoranda was therefore intentionally correlated with retrieval demands of the trials.
Following the previous example, the new items of Trial 14nc logically interfere more with the memoranda because the features of the lures overlap with those of the to-be-recalled stimuli. Conversely, the red lures could be less easily confounded with the blue stimulus objects in 14c. The fact that every description and its complement have the same complexity is generally referred to as parity.
The lists were displayed using ascending presentation of length (length varied progressively from 1 to 8 items), as in the digit spans used in neuropsychological tests. Trial length 1 was only used as a warmup. A block automatically stopped after four errors within a given list length (an error was simply the failure of the participant to recall the sequence entirely in perfect order). Participants were given four trials per length.
They were also informed that the first three trials in each block would be treated as practice trials and then discarded from the analysis.
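The ascending schedule and stopping rule described above can be sketched as follows. This is a minimal sketch: `present_trial` is a hypothetical callback standing in for the actual trial presentation and recall check.

```python
def run_block(present_trial, max_len=8, trials_per_len=4, error_limit=4):
    """Ascending-length block: four trials at each length from 1 to 8;
    the block stops once `error_limit` errors occur at a single length.
    `present_trial(length)` runs one trial and returns True if the whole
    sequence was recalled in perfect order (anything less is an error)."""
    results = []  # (length, correct) for each administered trial
    for length in range(1, max_len + 1):
        errors = 0
        for _ in range(trials_per_len):
            correct = bool(present_trial(length))
            results.append((length, correct))
            if not correct:
                errors += 1
                if errors >= error_limit:
                    return results  # stopping rule reached
    return results
```

For instance, a participant who recalls every list of length 1 and 2 but fails at length 3 would complete twelve trials before the block stops.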
After this warmup, there were four trials per list length in each condition. When the task was a simple span task, there was a ms inter-item interval. When the task was a complex span task, we used the operation span (OS) task procedure. In OS, participants are required to perform mathematical operations between memory items (see Conway et al.). An equation was displayed on the screen, and the participant had three seconds to judge the equation by clicking a button (true or false) before the next item was displayed.
The equation disappeared after the participant made a response, just before the next item was displayed. This interleaved processing task was thought to prevent participants from chunking freely.
For the non-chunkable simple span, for a given list length, the most incompressible lists alternated with less incompressible lists; otherwise, chunks would have exhibited too much similarity across the experiment. For instance, in Figure 1, Trial 10nc shows the most incompressible three-object set, with a first 2-feature difference (size and color, between the little white square and the large black square) followed by a second 2-feature difference (size and shape, between the large black square and the small black triangle), whereas Trial 9nc shows a less incompressible 3-object set, ordered using a 3-feature difference followed by a 2-feature difference to make the chunking process harder.
The inter-item distance (the summed number of feature differences between objects) is convenient to describe the relationships between features, but Feldman describes more precisely how the features can be redescribed to compress the sum of information in each set of objects (the compression process is not always related to inter-item distance). Here, for instance, the overall description of the three objects in Trial 9nc requires a minimal logical expression of 5 features, instead of 8 features for 10nc (see Feldman). Overall, all of the category structures of a given length were chosen to be less compressible in the non-chunkable condition than in the chunkable condition.
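For concreteness, the inter-item distance mentioned above is just a count of differing feature values between consecutive objects. This is a sketch; the encoding of objects as (size, color, shape) tuples is our assumption.

```python
def feature_distance(obj_a, obj_b):
    """Number of dimensions on which two objects differ."""
    return sum(a != b for a, b in zip(obj_a, obj_b))

def inter_item_distances(sequence):
    """Feature differences between each pair of consecutive objects."""
    return [feature_distance(a, b) for a, b in zip(sequence, sequence[1:])]
```

For the three objects of Trial 10nc described above, this yields two successive 2-feature differences.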
To compute an estimate of the span in each condition, a value of. Concurrent task. After averaging by participant, a paired-samples t-test was run on accuracy at the concurrent task. No significant linear trend (using repeated-measures ANOVA with a polynomial linear trend, or simply using correlations) was found between length of the memoranda and accuracy for either the chunkable or non-chunkable condition (again, after averaging by participant).
Effect of task procedure and category-set complexity. In the simple-span and complex-span conditions, the mean spans were 3.
The mean spans when the lists of objects were chunkable and non-chunkable were 3. The spans in all four conditions are shown in Table 1 and Figure 2. Mean span and standard errors, by procedure (simple vs. complex span).
The global chunking scores are simply based on the two average values of the same line in the table. The individual chunking scores were computed as a ratio separately for each participant (standard errors are in parentheses).
This interaction seems to suggest that the chunking benefit was greater for the simple than for the complex span task. However, a better way to test the benefit of information compressibility is to calculate how much information could be packed in the diverse conditions compared to a baseline. Here, if the increased span due to chunking was 2.
The expected number 2. Given a baseline capacity of 1. Since the observed increase was actually 4. The next analysis therefore tested whether more information was compressed in the chunkable stimuli of the simple span task condition, which would be the case if the span in this condition significantly exceeded a simple multiplicative effect. Multiplicative chunking effect test.
To test whether compressibility had roughly the same effect in the simple and complex-span tasks, we measured the average chunking performance for the entire task as the following ratio: the average span of the chunkable condition divided by the average span of the non-chunkable condition, for each type of task.
The span here still refers to the one computed using the method described above in subsection Scoring. We obtained the following average chunking performance (the four following numbers are the average span values for the four conditions obtained across participants): Simple span: 4.
Compressibility in the lists of objects therefore had a multiplicative effect on recall, insofar as participants recalled about 1. Individual chunking performance was then calculated as follows for every participant: the span of the chunkable condition divided by the span of the non-chunkable condition, for each type of task.
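The individual chunking score just described is a simple ratio; it can be computed as below (the spans used here are illustrative numbers only, not the study's data).

```python
def chunking_score(span_chunkable, span_nonchunkable):
    """Per-participant chunking score: span obtained with chunkable lists
    divided by span obtained with non-chunkable lists, computed
    separately for the simple and the complex span task."""
    return span_chunkable / span_nonchunkable

# Hypothetical participant: a score of 1.5 means 1.5 times as many
# items retained when the lists were compressible.
score_simple = chunking_score(4.5, 3.0)    # simple span task
score_complex = chunking_score(3.6, 2.4)   # complex span task
```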
These individual chunking scores were submitted to a Bayesian analysis. Using JASP, we tested the null hypothesis that the chunking scores of the simple and complex tasks would be the same. The appropriate alternative hypothesis was that the chunking score for complex span tasks is lower than the chunking score for simple span tasks.
The Cauchy prior with a width of. With that alternative hypothesis (using just half of the Cauchy prior), the Bayes factor was. A less theoretically-guided test would be a two-tailed test, the alternative now being that the chunking scores could differ in either direction.
With that alternative hypothesis, the data came out with a Bayes factor of 7. Both exceed the ratio of three that seems to be a standard convention for a sufficiently decisive finding. This other surprising result leads us to believe that chunking in immediate memory is a deep process that is resistant to interleaved processing tasks, at least deeper than what our initial hypotheses suggested. To ensure that other effects that we report below produce a Bayesian ratio of 3. We obtained a Bayes factor of 1.
Also, the Bayes factor for this alternative model against a model without the interaction component was 1. Other simpler models (one model including only an effect of task procedure, another including only an effect of category-set complexity, and another including both effects without interaction) were all much better than the null model, but there was more evidence for the full model. We found no fundamental difference between the simple and complex span tasks regarding the ability to recode information.
We showed that a chunking process can operate in complex span tasks in a manner comparable to simple span tasks, inasmuch as we found a ratio of items retained of about 1.
Compression seems more ubiquitous than predicted, meaning that the formation of chunks can presumably occur even in complex span tasks. This could mean that interleaved processing tasks simply reduce the time available to process the memory items, and that chunking and related recoding processes can still sufficiently occur even in complex span tasks and may therefore be more automatic than expected. In other words, the time available to process the memory items is more limited during a complex span task than during a simple span task, and this could simply impair chunking proportionally.
The idea that chunking can occur in complex span tasks is plausible. The authors showed that, although not presented in immediate succession because of the concurrent task, the different constitutive elements of a stimulus list of letters could be recognized as chunks. Because several items need to be reactivated for potential chunking, the authors concluded that complex attentional processes must be at play in working memory.
Our finding complements the results of this previous study by showing that new chunks can be formed within the attentional constraints imposed by complex span tasks. Secondly, we showed that the average span was less than three objects using a simple span task with incompressible sequences. It might be possible that participants retained about four objects, but some objects might not be perfectly encoded. One could argue that such a rule could have simplified the recall process with limited chunking during encoding for a particular trial, since the rule could be used across trials.
If true, we believe that it still does not undermine the idea that chunkability was manipulated in our experiment. Although such a rule can help determine the information to encode for a chunkable set of items (this is actually the core of our hypothesis: compressible structures represent less information than the original sets of individual items), one important aspect of our method is that the task required participants to reconstruct order.
This reconstruction process required encoding information about the entire sequence of items. More importantly, even if such a rule makes some of the potential answers ineligible, it does not simplify encoding until the participant is presented with the particular sequence of objects within which the specific order is unique. Also, a general rule could equally be applied to non-chunkable series, since there were never two consecutive objects with more than one feature in common.
If such rules were used, they could therefore be used no matter the condition, but their use would have implied rapid deductive logic, which does not seem easier than just encoding the objects as they come.
Still, we reanalyzed our data published in Cognition to provide further evidence that our procedure is suitable. The reason is that the chunkable trials were randomly mixed with non-chunkable trials in that earlier data set, which supposedly made it unstrategic to apply a general rule to a given condition.
We selected the exact same trials, and we adopted a scoring procedure that fitted both data sets. The result shows strikingly similar performance between the two data sets in the simple span task conditions (the old experiment did not use a complex span task): we observed a span of 2.
Finally, aside from potential alternative explanations for why our chunkable series were recalled more easily, our result more generally bears on whether information can be manipulated in complex span tasks to the same extent as in simple span tasks.
We observed that effects of information compression contributed to performance levels to a similar extent in simple and complex span tasks. Chunking processes could still operate in complex span tasks, even though such tasks were designed to divide processing of the memoranda by imposing an additional processing task in between the to-be-recalled items. If information can be manipulated in complex working memory span tasks, it simply means that attention might not be directed away from storage completely in such tasks, and that undivided processing of the memoranda is not needed to form a new chunk.
This result suggests that (i) neither simple nor complex spans can be assumed to reflect working memory capacity in the absence of chunking, and (ii) the difference between the two types of task must rest in some mechanism other than prevention of mnemonic strategies by the interleaved processing task in complex span. Our conclusion is that observation of the spans in non-chunkable vs.
One potential limitation of the present study is that visual objects were used as memoranda. Even though visuo-spatial complex span tasks have been used in the literature e. After citing research indicating evidence from visuospatial tasks that complex spans and simple spans are more correlated than for verbal tasks, Kane et al. In that respect, we think that visuo-spatial tasks can sufficiently help discriminate the two types of tasks.
However, data from further experiments using verbal material would help extend our result. In the future, it could also be important to study other presentation time lags. We believe that a compressibility-based approach could account for a connection between these processes. Conversely, not all compression processes are as simple as a chunking process since many options exist to recode information.
This method corresponds to the all-or-nothing method described in Conway et al. After reading it out loud, you should then decide whether the given answer is correct or incorrect.
If the problem is correct, respond "yes"; if the answer is incorrect, respond "no." You will then see a word. Read the word out loud. You will then see another math problem, which you will read out loud and then indicate whether the answer is correct or not. Words and math problems will alternate. At some point, you will be asked to recall all the words from the series.
This means you should indicate the order in which the words were presented. Being correct means that you click on the buttons in the same order the items appeared in the sequence. Any mistake (recalling too many items, recalling too few items, or recalling items in the wrong order) counts as incorrect.
There is no way to correct mistakes, so be careful! At the end of the experiment, you will be asked if you want to save your data to a set of global data. After you answer the question, a new Web page window will appear that includes a debriefing, your data, your group's data, and the global data. If you are using a tablet, tap the Start Next Trial button to begin. Tap either the Yes or No button to answer the math problem.
Tap the other response buttons to recreate exactly the sequence of items you just saw. If you are using a computer, click the Start Next Trial button to begin. Click either the Yes or No button to answer the math problem. Click the other response buttons to recreate exactly the sequence of items you just saw. This is optional. Whatever you choose, CogLab will save your individual data and record that you have completed the lab.