Abstract
Purpose
Current Army doctrine stresses the need for military leaders to make flexible and adaptive decisions in an unknown future environment, location and enemy. To assess a military decision maker's ability in this context, this paper aims to modify the Wisconsin Card Sorting Test, which assesses cognitive flexibility, into a military-relevant map task. Thirty-four military officers from all service branches completed the map task.
Design/methodology/approach
The purpose of this study was to modify a current psychological task that measures cognitive flexibility into a military-relevant task that includes the challenge of overcoming experiential bias, and to understand the underlying causes of individual variability in the decision-making and cognitive flexibility behavior of active duty military officers on this task.
Findings
Results indicated that non-perseverative errors were a strong predictor of cognitive flexibility performance on the map task. Decomposition of non-perseverative error into efficient errors and random errors revealed that participants who did not complete the map task changed their sorting strategy too soon within a series, resulting in a high quantity of random errors.
Originality/value
This study serves as the first step in customizing cognitive psychological tests for a military purpose and understanding why some military participants show poor cognitive flexibility.
Citation
Moten, C., Kennedy, Q., Alt, J. and Nesbitt, P. (2017), "Analysis of performance on a modified Wisconsin Card Sorting Test for the military", Journal of Defense Analytics and Logistics, Vol. 1 No. 1, pp. 34-46. https://doi.org/10.1108/JDAL-05-2017-0007
Publisher
Emerald Publishing Limited
Copyright © 2017, In accordance with section 105 of the US Copyright Act, this work has been produced by a US government employee and shall be considered a public domain work, as copyright protection is not available.
Introduction
The US Army published its operating concept, which describes how the Army will operate at the strategic, operational and tactical levels without knowing much about the future environment, location and enemy (USA Department of the Army Training and Doctrine Command, 2014). To accomplish this objective, training for Army officers must focus on adaptive decision-making through realistic training in actual and virtual environments (USA Department of the Army Training and Doctrine Command, 2014). According to Army doctrine, a key conceptual component of Army leaders' intellectual ability is mental agility. Mentally agile leaders can anticipate and adapt to a given situation to make the best decision (USA Department of the Army Training and Doctrine Command, 2012). For example, the types of operations executed in Iraq and Afghanistan required military leaders to assess the situation in their environment daily and make the necessary changes to their tactics for survival (Brown, 2007; Hartman, 2008; Mulbury, 2007). In psychology and neuroscience, this ability is known as cognitive flexibility and has been tested in multiple laboratory environments (Vartanian and Mandel, 2011). Although there are laboratory-based tests that measure cognitive flexibility, they are not directly applicable to military training needs (Perla, 1990). Thus, the purpose of this study was to:
modify a current psychological task that measures cognitive flexibility into a military relevant task requiring decision makers to overcome their experiential bias; and
understand underlying causes of individual variability in the cognitive flexibility behavior of active duty military officers on this task.
One common psychological task of cognitive flexibility is the Wisconsin Card Sorting Test (WCST) (Grant and Berg, 1948). The WCST taps the working memory, set-shifting and inhibition components of executive function. Participants view five cards: one displayed at the top center of the screen and the remaining four displayed across the bottom. Each card contains symbols that vary in number, shape and color. Over several trials, participants try to figure out the matching rule that will correctly match the card at the top of the screen with one of the four cards at the bottom. Unbeknownst to participants, the matching rule changes once they have made ten consecutive correct matches. For example, after ten consecutive correct matches based on the color of the symbols, the matching rule could change to the number or shape of the symbols. Thus, participants must not only learn and maintain in working memory the correct matching rule while inhibiting irrelevant stimuli but also exhibit cognitive flexibility in detecting when the rule has changed and adapting their selections accordingly (Grant and Berg, 1948). A participant finishes the task when they complete two rounds of each matching rule, for a total of six rules, or complete a total of 128 trials. Successful performance on the WCST requires both set switching (switching to a new sorting rule based on feedback) and set maintenance (maintaining the appropriate strategy long enough to reach the next sorting rule) (Barceló and Knight, 2002; Huizinga and van der Molen, 2007; Miyake et al., 2000). Based on these findings, we examined cognitive flexibility performance regarding both set maintenance and set shifting.
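The rule-change bookkeeping just described can be sketched in a few lines. This is an illustrative reconstruction, not the authors' task software; the `run_session` helper and its trial encoding are our own simplification, in which each trial records only the dimension the participant matched on.

```python
# Illustrative sketch (not the authors' task software) of the WCST
# protocol described above: the matching rule silently advances after
# 10 consecutive correct matches, and the task ends after six rule
# completions (two rounds of each rule) or 128 trials.

RULES = ["color", "number", "shape"]   # the three matching dimensions
CRITERION = 10                         # consecutive correct matches per rule
MAX_TRIALS = 128
RULES_TO_FINISH = 6                    # two rounds of each rule

def run_session(responses):
    """responses: per trial, the dimension the participant matched on.
    Returns (rules_completed, trials_used)."""
    rule_idx = streak = completed = 0
    for trial, choice in enumerate(responses[:MAX_TRIALS], start=1):
        if choice == RULES[rule_idx % len(RULES)]:
            streak += 1
        else:
            streak = 0
        if streak == CRITERION:        # criterion met: rule changes silently
            completed += 1
            streak = 0
            rule_idx += 1
            if completed == RULES_TO_FINISH:
                return completed, trial
    return completed, min(len(responses), MAX_TRIALS)

# An idealized performer who anticipates every silent rule change
# would finish in the minimum of 60 trials.
perfect = [RULES[(i // CRITERION) % len(RULES)] for i in range(60)]
print(run_session(perfect))  # -> (6, 60)
```

A participant who never abandons the first dimension would, by contrast, complete only one rule and exhaust all 128 trials.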
Method
Participants
Thirty-four military officers (11 US Marine Corps, 10 US Navy, nine US Army, three US Coast Guard and one US Air Force) with a mean age of 35.11 years (s = 4.90) completed our modified version of the WCST, named the map task. The mean time in service was 12.7 years (s = 4.42); mean time deployed was 19.57 months (s = 12.12) (note: one participant did not report their deployment time). Of the 31 participants with deployment experience, the mean time since their last deployment was 37.98 months (s = 25.18), and 19 of those deployments were to ground combat zones (Iraq or Afghanistan). Over 70 per cent of the participants served as staff officers during their most recent deployment. The majority of the participants were male (30 males, four females). Participants also had normal ranges of visual processing speed (Trails A mean (s) = 22.60 (6.29) s; Trails B mean (s) = 44.04 (20.13) s) (Tombaugh, 2004) and working memory (Digit span forwards mean (s) = 11.44 (2.11); Digit span backwards mean (s) = 9.53 (2.43)) (Lezak, 1995; Weschler, 2008). The race and ethnicity of the participants were not recorded. All participants were completing a master's degree.
Map task (modified Wisconsin Card Sorting Test).
We developed the map task in consultation with military advisors. Based on these consultations, we decided not to use the standard WCST symbols to test the officers. Instead, we used military graphical symbols, because military decision makers commonly reason over maps with a set of common graphical terms and symbols, providing a familiar context for most military officers. On a computer screen, participants saw five maps, analogous to the cards of the WCST: one map at the top center of the screen and the remaining four across the bottom (Figure 1). Each map contains military graphics that vary in meaning, color and shape (US Department of the Army, 2004). Graphics fall into three categories distinguishable by color: friendly force (blue), type of proposed action such as ambush (black) and type of enemy force (red). Each of these categories has three possible shapes. Each shape indicates a particular type of friendly force (rectangle and circle), intended action (lines and arrows) or enemy force (diamond). These shapes and colors are standard graphics familiar to military officers and require no additional instruction in their basic meaning. To put it another way, these graphics are as familiar to military officers as stars, squares and circles are to the general population. We also increased the difficulty of the map task with a level that represents the absence of a graphic or shape (Figure 2).
The map task is more relevant to military leaders than the original WCST because it provides a specific military context by using symbols associated with military operations. Military leaders bring preconceived biases from experience when looking at these symbols. Military leaders must also be adaptive; that is, they must be able to realize that strategies based on their previous experience and training are not working in the current situation and be willing to try a new approach. The speed with which they enact this cognitive flexibility can have serious consequences. The original WCST does not invoke these preconceived biases and is likely to provide an overly optimistic assessment of cognitive flexibility in this domain-specific case. Thus, we anticipated that our subjects would perform more poorly on this more difficult task than the normed groups for the WCST.
Similar to the method of Nelson (1976), we reduced the matching criteria on the map task to the type of graphic: friendly, intent or enemy. For example, if the current matching rule is friendly graphics and the top map shown is similar to the card in Figure 1, then the correct choice would be the map in the lower left-hand corner of Figure 1. One additional modification is that not all maps have all three types of graphics, and participants can match maps based on the absence of a graphic type (Figure 2).
Like the original WCST, participants only receive instructions that they must match one of the maps on the bottom to the one on the top. Thus, they only receive information by sampling each option and collecting an observation. Unbeknownst to the participant, the matching rule changes once the participant has ten consecutive correct matches. As in the WCST, the task finishes when either the participant has completed two rounds of each matching rule for a total of six rounds or until they have exhausted 128 trials.
We measured participant performance using typical WCST metrics:
Total number of trials: Defined as the total number of trials needed to complete the task. The maximum number of trials a participant can achieve is 128.
Per cent correct: Defined as the total number of trials with a correct match divided by the total number of trials.
Per cent perseverative responses: Defined as the number of incorrect responses that would have been correct under the previous matching rule, divided by the total number of trials. For instance, if a participant completes ten consecutive correct matches on the friendly (blue) symbol rule, the rule then changes, and the participant continues the same pattern on the 11th trial, then the 11th trial counts as a perseverative error.
Per cent non-perseverative error: Defined as all other incorrect responses, after excluding perseverative errors, divided by the total number of trials.
Number of trials to complete the first matching rule: Defined as the number of trials necessary for the participant to change from the first to the second matching rule. The lowest number possible is 10.
Number of rules achieved: Defined as the number of matching rules the participant achieved. The maximum number of rule changes a participant can achieve is six.
Failure to maintain set: Defined as having five or more consecutive correct trials without completing that rule. In other words, the participant made an error during the middle of a streak of correct matches that would prevent them from advancing to the next matching rule.
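Given a per-trial log in which each trial has already been tagged, the percentage metrics above reduce to simple counts. The sketch below is our own reconstruction for clarity, not the authors' scoring code; the tag names and the `score` helper are hypothetical.

```python
# Our reconstruction (not the authors' scoring code) of the WCST-style
# percentage metrics listed above. Each trial in the log is pre-tagged as:
#   "correct"       - a correct match
#   "perseverative" - an error that fit the previous matching rule
#   "other"         - any remaining (non-perseverative) error

def score(trials):
    n = len(trials)
    correct = sum(t == "correct" for t in trials)
    persev = sum(t == "perseverative" for t in trials)
    return {
        "total_trials": n,
        "pct_correct": correct / n,
        "pct_perseverative": persev / n,
        "pct_non_perseverative": (n - correct - persev) / n,
    }

# 10 correct matches, the rule changes, one perseverative error, then
# two stray errors around a correct match:
log = ["correct"] * 10 + ["perseverative", "other", "correct", "other"]
m = score(log)
print(round(m["pct_correct"], 3))  # 11 of 14 trials correct -> 0.786
```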
With our modified task, we expected the map task to be more challenging and more aligned with real-world military decision-making; thus, we conducted a deeper analysis of participant error. Similar to the method of Barceló and Knight (2002), we decomposed non-perseverative errors into efficient and random (non-efficient) components as indices of set switching and set maintenance (Huizinga and van der Molen, 2007). Specifically, we score an efficient error when a participant commits a perseverative error and then makes a correct match within the next three trials. To put it another way, the participant realizes that the current matching strategy is no longer valid and carefully chooses among the remaining alternatives to find the correct new strategy. In contrast, a random or non-efficient error occurs when a participant does not correctly identify the new matching pattern within three trials after committing a perseverative error, or when the participant gives an incorrect response on a trial after a correct response on the previous trial (Barceló and Knight, 2002).
We index set-switching by perseverative errors and efficient errors in which fewer perseverative errors and greater efficient errors indicates better set switching. We index set maintenance by non-efficient errors in which fewer non-efficient errors indicate better set maintenance (Huizinga and van der Molen, 2007).
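Our reading of this decomposition can be sketched as follows. The three-trial window and the error categories are taken from the description above, but the `decompose` function and its trial encoding are a hypothetical reconstruction, not the published scoring procedure.

```python
# Hedged sketch (our reconstruction, not the published procedure) of the
# non-perseverative error decomposition described above. Trials are
# encoded as "correct", "perseverative", or "error" (a non-perseverative
# error). Errors made while searching for the new rule count as efficient
# only if a correct match follows within `window` trials of the
# perseverative error; everything else, including an error immediately
# after a correct response, counts as non-efficient.

def decompose(trials, window=3):
    efficient = non_efficient = 0
    i = 0
    while i < len(trials):
        if trials[i] == "perseverative":
            win = trials[i + 1:i + 1 + window]      # the search window
            if "correct" in win:                    # recovered in time
                k = win.index("correct")
                efficient += sum(t == "error" for t in win[:k])
                i += 1 + k                          # resume at the correct trial
                continue
            non_efficient += sum(t == "error" for t in win)
            i += 1 + len(win)
            continue
        if trials[i] == "error":                    # error after a correct match
            non_efficient += 1
        i += 1
    return efficient, non_efficient

# One efficient recovery after a rule change, then one stray error:
print(decompose(["correct", "perseverative", "error", "correct", "error"]))
# -> (1, 1)
```

Under this encoding, fewer non-efficient errors indicate better set maintenance, matching the indexing scheme described above.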
Measures
Demographic survey. Age, gender, service branch, rank and deployment experience were captured in the demographic survey.
Post-task survey. Participants completed a free response question regarding the map feature on which they sorted and an ordinal scale question regarding how quickly they realized the sorting rule had changed:
immediately/after 1-2 trials;
after a few trials (3-4 trials);
after several trials (5+ trials); and
did not realize the sorting rule had changed.
Visual-motor speed and working memory
We used the Trails A, Trails B, Digit Span Forwards and Digit Span Backwards tests as surrogate measures for IQ, which is typically used to analyze variance in performance on the WCST. First, all the participants were highly educated officers completing requirements for a master's degree; therefore, it is unlikely that much variability in IQ exists among this sample of officers. Second, successful performance on the WCST relies heavily on working memory, as one needs to maintain one's previous selection(s) in memory to arrive at the correct sorting pattern. There also is some evidence that working memory, in turn, is affected by processing speed (Bayliss et al., 2003).
Trails A and B.
Because the map task places demands on visual processing speed, we included the Trails A and B tests as covariate measures (Weschler, 2008). In Trails A, the numbers 1 through 25 are randomly distributed on a worksheet. The participant starts at 1 and must draw a line to each number in numerical order. Participants are instructed to work as quickly and accurately as they can. In Trails B, participants see both numbers and letters and must connect 1 to A, A to 2, 2 to B and so on until they reach 12, then L and finally 13. They also are instructed to work as quickly and accurately as they can. The test-retest reliability on these measures ranges from 0.76 to 0.94 (Wagner et al., 2011). In the current sample, performance on Trails A and B was moderately correlated (r = 0.506, p = 0.003). Trails A and B have age- and education-based norms; these norms were used in computing Trails A and B performance in the current sample (Tombaugh, 2004).
Digit span forwards and backwards.
Because the map task also relies on working memory, the digit span forwards and backwards tests from the Wechsler Adult Intelligence Scale (WAIS-IV) were also included as covariates (Weschler, 2008). In digit span forwards, the experimenter states a series of digits, starting with two digits, and the participant must repeat them back. The number of digits increases, with two trials per number of digits. The test is discontinued if the participant responds incorrectly to both trials for a particular number of digits or reaches the maximum of eight digits. In digit span backwards, the same procedure is followed, except that the participant must repeat the digits in reverse order, up to a maximum of eight digits. Test-retest reliability of the digit span measures ranges from 0.66 to 0.89 (Lezak, 1995). In the current sample, performance on digit span forwards and backwards was positively correlated (r = 0.350, p = 0.042).
Statistical modeling techniques
A combination of factor analysis and k-means clustering was used to explore the data and determine whether distinct groups of performers existed. Factor analysis indicated that non-perseverative error was the highest-loading variable, with a value of 0.99. Next, cluster analysis produced three groupings based on non-perseverative error score: the low error cluster (n = 8), moderate error cluster (n = 12) and high error cluster (n = 14) [Figure 3(a)]. Because the data were not normally distributed, Mann–Whitney tests with a two-tailed alpha level of 0.05 were used to compare the performance of high and low cognitive flexibility performers on all assessed measures. The effect size was computed by dividing the test statistic by the square root of the sample size and assessed using Cohen's criteria.
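The effect-size calculation can be illustrated with a self-contained sketch. We assume the large-sample normal approximation without tie correction; a real analysis would use a statistics package (e.g. `scipy.stats.mannwhitneyu`), and the function name here is our own.

```python
# Illustrative sketch of the comparison described above: a Mann-Whitney U
# statistic (pair-counting form), its large-sample z approximation
# (no tie correction), and the effect size r = |z| / sqrt(N), which is
# then read against Cohen's criteria (~0.1 small, ~0.3 medium, ~0.5 large).
import math

def mann_whitney_r(x, y):
    nx, ny = len(x), len(y)
    # U counts the pairs (a, b) with a > b; ties contribute 0.5
    u = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a in x for b in y)
    mu = nx * ny / 2
    sigma = math.sqrt(nx * ny * (nx + ny + 1) / 12)
    z = (u - mu) / sigma
    r = abs(z) / math.sqrt(nx + ny)   # effect size: z over sqrt(sample size)
    return u, z, r

# Toy data: every value in y exceeds every value in x, so U = 0
u, z, r = mann_whitney_r([1, 2, 3], [4, 5, 6])
print(u, round(r, 3))  # 0.0 0.802
```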
Procedures
The institution’s IRB approved the study. Participants attended the laboratory individually for a single testing session. They first completed the approved consent form, then the demographic survey, Trails A and B and Digit Span tests. Participants then sat at a standard desk and completed the computerized map task as if they were informing, yet removed from, tactical operations from a military operations center. Finally, participants completed the post-task survey questionnaire.
Results
Table I shows the summary statistics of the map task results. Performance on most measures is consistent with results from the original WCST including a sample of veterans of similar age to our participants (Shan et al., 2008; Shura et al., 2015). However, percentage of non-perseverative error was higher than that of previous studies (Shan et al., 2008): mean of 20.86 per cent (SD = 13.57 per cent).
Cluster analysis results
Cluster analysis revealed three clusters of participants based on the percentage of non-perseverative errors. All participants in the low error cluster, one participant in the moderate error cluster and no participants in the high error cluster completed all six rule changes of the map task. Figure 3(b) shows the number of trials participants required to complete the first matching rule, clustered by total non-perseverative errors. All participants in the low and moderate error clusters, but only 10 of the 14 participants in the high error cluster, completed the first matching rule. Furthermore, Figure 3(c) indicates that all participants in the low and moderate error clusters, but only two of the 14 participants in the high error cluster, completed the first three matching rules. Figure 3(d) shows that all eight participants in the low error cluster, six of the 12 participants in the moderate error cluster and only one of the 14 participants in the high error cluster completed the first five matching rules.
We further classified participants as high or low performers. High performers completed all five rule changes; low performers completed fewer than five. All but one high performer fell into the low error cluster. The nine high performers had a total number of non-perseverative errors significantly lower than that of low performers (Mhigh = 14.11, Mlow = 51.84, z = –4.4, p < 0.0001, effect size = 0.753). As expected, the high performers needed fewer trials to complete the first matching rule than the low performers (Mhigh = 17.89, Mlow = 53.72, z = –3.4, p < 0.002, effect size = 0.583). Analysis of the failure to maintain set metric produced a non-significant difference between high and low performers (Mhigh = 2.56, Mlow = 2.24, z = 0.42, p = 0.69, effect size = 0.07). Exploratory analyses indicated that high and low performers performed similarly on Trails A and B and Digit Span Forwards and Backwards (all p's > 0.17; all effect sizes < 0.24).
Non-perseverative error analysis
We next sought to better understand the variability in non-perseverative error rate. No significant associations were found between non-perseverative error rate and performance on the digit span and Trails A and B tests, or with ground combat experience (all p's > 0.22). Next, we decomposed non-perseverative errors into efficient and non-efficient errors (Table II). As expected, high performers achieved a change in matching rule efficiently, whereas low performers shifted to a new rule too soon in the current series. Although there was no significant difference between high and low performers in the average number of efficient errors (Mhigh = 2.33, Mlow = 1.76, z = 1.29, p = 0.21, effect size = 0.22), high performers had, on average, significantly more perseverative errors (Mhigh = 3.67, Mlow = 1.68, z = –3.27, p = 0.0006, effect size = 0.56) and significantly fewer non-efficient errors (Mhigh = 10.67, Mlow = 49.32, z = –4.39, p < 0.0001, effect size = 0.75). Thus, high performers had better set maintenance and set switching than low performers.
Post-task survey
We also analyzed the responses of participants in the post-task survey based on their cluster groupings. Variability in the self-reported realization of rule change increased from the low error to high error cluster group. All low error cluster participants reported that they realized a rule change within 1-2 trials. The moderate error cluster reported the following:
four participants realized a rule change within 1-2 trials;
seven participants realized a rule change within 3-4 trials; and
one participant reported noticing a rule change in 5+ trials.
For the high-error cluster group: three participants reported realizing a rule change within 1-2 trials, three participants within 3-4 trials, four participants within 5+ trials, and four participants did not realize the rules changed at all.
Discussion
Military operations require leaders to have agile and adaptive decision-making skills. However, current military training typically does not focus on training the cognitive functions necessary for optimal decision-making, such as cognitive flexibility. The purpose of this study was to create a military-relevant measure of cognitive flexibility and to understand the underlying mechanisms of variability in cognitive flexibility performance that could aid military decision-making training. Toward meeting these goals, we modified the WCST into the military-relevant map task. Although adequate performance was met regarding percentage of correct responses and number of rules obtained, we were surprised by the high frequency of non-perseverative errors compared to studies using the original WCST (Nelson, 1976; Ozonoff, 1995; Barceló and Knight, 2002; Kado et al., 2012). The high frequency of non-perseverative errors cannot be explained by poor working memory or processing speed. Thus, this performance measure required a more detailed analysis.
One potential explanation for the high number of non-perseverative errors is that, unlike the original WCST, the military symbols on the map task have specific meanings, and experienced officers could read each map as a military operation. The symbols on the map task are primarily ground-based, which could lead officers familiar with these symbols to attempt to match the maps as a type of military operation rather than simply matching on the correct symbol color. However, no significant difference in non-perseverative error rate was found between participants who previously had a ground combat deployment and those who did not. Future data collection will gather additional information about subjects' ground-based military operations experience to test this idea.
As we could not find an association between military experience and non-perseverative error, we further analyzed the components that define non-perseverative error: efficient error and non-efficient error. Before analyzing the different types of error, we first wanted to determine whether specific taxonomies existed for non-perseverative error. Using factor and cluster analysis, we determined that all participants group neatly into one of three clusters (high, moderate and low error). We then analyzed the participants who completed the map task to determine whether these taxonomies relate to map task performance. Indeed, we found a relationship: the low error cluster captured all but one participant who completed the map task.
Next, we decomposed non-perseverative errors into efficient and non-efficient errors and compared the results of the high and low performers. This analysis was critical to understanding participant performance on the map task because a surface analysis of the data would seem to indicate that the map task measures cognitive flexibility differently than the original WCST. To clarify, we would expect successful map task participants to have low perseverative errors and few failures to maintain set, just as if they were taking the original WCST; however, the initial map task results seemed to indicate an opposing trend.
After analyzing the high number of non-perseverative errors, we conclude that the map task does appropriately measure cognitive flexibility. The data suggest that, after committing a perseverative error, high performers explore the other choices to determine that a new matching rule is in effect, ultimately making the correct choice within three or fewer trials. Hence, these participants demonstrated adequate cognitive flexibility by exhibiting both set switching and set maintenance (Huizinga and van der Molen, 2007). Additionally, the higher number of perseverative errors for the high performers was due to their achieving more matching categories than the low performers. In contrast, low performers showed especially poor set maintenance, as indicated by a high quantity of non-efficient errors. The result of this pattern is that these officers switched decision-making tactics too soon to see whether a particular tactic worked. This finding is essential because, during combat operations, military leaders are expected to make appropriate and sound changes to their tactical actions based on a dynamic environment. Therefore, in contrast to the original WCST, the map task requires a more thorough analysis of non-perseverative error to determine the proper application of cognitive flexibility.
Limitations
This study had several limitations. First, the participants were a convenience sample consisting primarily of military officers attending the Naval Postgraduate School. Further studies should examine cognitive flexibility among a wider range of military personnel ranks and career fields. Second, we neither used the original WCST to determine the initial perseverative tendencies of the participants nor asked subjects about their perseveration strategy on the post-task survey. Later studies can use the original WCST, or a suitable alternative test, to determine participants' perseverative tendencies, and the post-task survey for the map task can capture the perseveration strategy participants used. Finally, research is needed to determine the specific relationship between expertise and cognitive flexibility with regard to military training. This is important because some studies have indicated that both experts and novices have a tendency toward cognitive entrenchment, especially when faced with repetitive situations (Chi, 2006; Dane, 2010; Lewandowsky et al., 2007; Sternberg, 1996). In other words, experts and novices can have tendencies to follow certain patterns of thought with little to no interest in exploring alternatives, even when a new alternative could provide a better solution. It is not clear, however, that this conclusion applies to experts in general. Other research, for instance, shows that experts can overcome this bias with a greater level of experience (Bilalić et al., 2008; Rabinowitz, 1993). Research in this area would provide more insight into how to identify these tendencies and train military members to become more cognitively agile in new and dynamic environments.
Conclusion
In sum, adaptive decision-making ability, specifically cognitive flexibility, is an essential trait for military leaders to possess. This study was the first of its type to address this gap of knowledge by adapting the WCST, a well-known measure of cognitive flexibility, for a military context. We specifically focused on the set switching and set maintenance executive functions. The implications of this study are that:
set maintenance may be a skill that is currently undertrained among military officers; and
cluster analysis by non-perseverative error rate is a parsimonious method for identifying officers who may require additional screening to determine whether cognitive flexibility training, to help them think "outside the box", is appropriate.
Figures
Figure 3.
Strip charts displaying the distribution of map task participants clustered by (a) non-perseverative error, (b) number of trials to complete the first rule, (c) number of trials to complete the third rule and (d) number of trials to complete the fifth rule. The x-axis represents the total number of non-perseverative errors, and the y-axis represents the cluster label for a group of participants
Summary statistics of map task decision performance variables
Test metric | Mean | SD
---|---|---
Number of trials completed | 119.35 | 16.53
Per cent correct | 0.65 | 0.15
Per cent perseverative errors | 0.06 | 0.08
Per cent non-perseverative errors | 0.34 | 0.16
Number of trials to complete first matching rule | 42.97 | 28.95
Number of rules achieved | 3.20 | 1.93
Failure to maintain set | 2.32 | 1.49
Descriptive statistics of error type by cluster group and performance group
Error clusters/performance groups | Efficient error Mean (SD) | Non-efficient error Mean (SD) | Perseverative error Mean (SD)
---|---|---|---
Low error cluster (n = 8) | 2.25 (1.16) | 9.00 (4.17) | 3.75 (0.71)
Moderate error cluster (n = 12) | 2.33 (1.43) | 31.75 (5.12) | 2.92 (1.08)
High error cluster (n = 14) | 1.36 (1.01) | 62.57 (11.73) | 0.71 (0.91)
High performers (n = 9) | 2.33 (1.12) | 10.67 (6.34) | 3.67 (0.71)
Low performers (n = 25) | 1.76 (1.30) | 49.32 (17.79) | 1.68 (1.49)

Efficient and non-efficient errors are the two components of non-perseverative error.
Efficient errors occur when an incorrect response is given during the second trial of a new matching rule series; more efficient errors indicate better set switching. Non-efficient errors are an incorrect response on a trial after the participant achieved a correct response on the previous trial; fewer non-efficient errors indicate better set maintenance (Barceló and Knight, 2002)
References
Barceló, F. and Knight, R.T. (2002), "Both random and perseverative errors underlie WCST deficits in prefrontal patients", Neuropsychologia, Vol. 40, pp. 349-356.
Bayliss, D.M., Jarrold, C., Gunn, D.M. and Baddeley, A.D. (2003), “The complexities of complex span: explaining individual differences in working memory in children and adults”, Journal of Experimental Psychology: General, Vol. 132 No. 1, pp. 71-92.
Bilalić, M., McLeod, P. and Gobet, F. (2008), “Inflexibility of experts—Reality or myth? Quantifying the Einstellung effect in chess masters”, Cognitive Psychology, Vol. 56 No. 2, pp. 73-102.
Brown, R.B. (2007), “The agile-leader mind-set: leveraging the power of modularity in Iraq”, Military Review, Vol. 87 No. 4, p. 32.
Chi, M. (2006), “Two approaches to the study of experts’ characteristics”, in Ericsson, K.A., Charness, N., Feltovich, P. and Hoffman, R.R. (Eds), The Cambridge Handbook of Expertise and Expert Performance, Cambridge University Press, Cambridge, pp. 21-30.
Dane, E. (2010), “Reconsidering the trade-off between expertise and flexibility: a cognitive entrenchment perspective”, Academy of Management Review, Vol. 35 No. 4, pp. 579-603.
Grant, D.A. and Berg, E. (1948), “A behavioral analysis of degree of reinforcement and ease of shifting to new responses in a Weigl-type card-sorting problem”, Journal of Experimental Psychology, Vol. 38 No. 4, p. 404.
Hartman, W.J. (2008), Exploitation Tactics: A Doctrine for the 21st Century, US Army Command and General Staff School of Advanced Military Studies, Fort Leavenworth.
Huizinga, M. and van der Molen, M.W. (2007), “Age-group differences in set-switching and set-maintenance on the Wisconsin card sorting task”, Developmental Neuropsychology, Vol. 31 No. 2, pp. 193-215.
Kado, Y., Sanada, S., Yanagihara, M., Ogino, T., Ohno, S., Watanabe, K., Nakano, K., Morooka, T., Oka, M. and Ohtsuka, Y. (2012), “Executive function in children with pervasive developmental disorder and attention-deficit/hyperactivity disorder assessed by the Keio version of the Wisconsin card sorting test”, Brain and Development, Vol. 34 No. 5, pp. 354-359.
Lewandowsky, S., Little, D. and Kalish, M.L. (2007), “Knowledge and expertise”, in Durso, F.T. (Ed.), Handbook of Applied Cognition, 2nd ed., Wiley, Chichester, pp. 83-109.
Lezak, M.D. (1995), Neuropsychological Assessment, Oxford University Press.
Miyake, A., Friedman, N.P., Emerson, M.J., Witzki, A.H., Howerter, A. and Wager, T.D. (2000), “The unity and diversity of executive functions and their contributions to complex ‘frontal lobe’ tasks: a latent variable analysis”, Cognitive Psychology, Vol. 41 No. 1, pp. 49-100.
Mulbury, D.S. (2007), Developing Adaptive Leaders, a Cultural Imperative, US Army War College, Carlisle Barracks.
Nelson, H.E. (1976), “A modified card sorting test sensitive to frontal lobe defects”, Cortex; a Journal Devoted to the Study of the Nervous System and Behavior, Vol. 12 No. 4, pp. 313-324.
Ozonoff, S. (1995), “Reliability and validity of the Wisconsin card sorting test in studies of autism”, Neuropsychology, Vol. 9 No. 4, p. 491.
Perla, P.P. (1990), The Art of Wargaming, United States Naval Institute, Annapolis.
Rabinowitz, M. (Ed.) (1993), “Seeing the invisible: the perceptual cognitive aspects of expertise”, Cognitive Science Foundations of Instruction.
Shan, I., Chen, Y. and Su, T. (2008), “Adult normative data of the Wisconsin Card Sorting Test in Taiwan”, Journal of the Chinese Medical Association, Vol. 71 No. 10, pp. 517-522.
Shura, R.D., Miskey, H.M., Rowland, J.A., Yoash-Gantz, R.E. and Denning, J.H. (2015), “Embedded performance validity measures with postdeployment veterans: cross-validation and efficiency with multiple measures”, Applied Neuropsychology: Adult, available at: http://dx.doi.org/10.1080/23279095.2015.1014556
Sternberg, R.J. (1996), “Costs of expertise”, in Ericsson, K.A. (Ed.), The Road to Excellence, Lawrence Erlbaum Associates, Mahwah, NJ, pp. 347-354.
Tombaugh, T.N. (2004), “Trail Making Test A and B: normative data stratified by age and education”, Archives of Clinical Neuropsychology, Vol. 19 No. 2, pp. 203-214.
US Department of the Army (2004), FM 1-02, Operational Terms and Graphics, Government Printing Office, Washington, DC.
US Department of the Army Training and Doctrine Command (2012), ADRP 6-22, Army Leadership, Government Printing Office, Washington, DC.
US Department of the Army Training and Doctrine Command (2014), TRADOC Pamphlet 525-3-1, The US Army Operating Concept: Win in a Complex World, Government Printing Office, Washington, DC.
Vartanian, O. and Mandel, D.R. (2011), Neuroscience of Decision Making, Psychology Press.
Wagner, S., Helmreich, I., Dahmen, N., Klaus, L. and Tadic, A. (2011), “Reliability of three alternate forms of the Trail Making Tests A and B”, Archives of Clinical Neuropsychology, Vol. 26 No. 4, pp. 314-321.
Wechsler, D. (2008), Wechsler Adult Intelligence Scale–Fourth Edition (WAIS–IV), NCS Pearson, San Antonio, TX.
Further reading
Gallagher, P.S. and Prestwich, S. (2012), “Supporting cognitive adaptability through game design”, 6th European Conference on Games Based Learning, p. 165.