ARTICLE INFO

Article history:
Received 6 September 2012
Received in revised form 23 February 2013
Accepted 28 February 2013
Available online 29 March 2013

ABSTRACT
We developed a senior-friendly suite of online games for learning with interactive calibration for increasing difficulty, and evaluated the feasibility of a randomized clinical trial to test the hypothesis that seniors aged 60–80 can improve key aspects of cognitive ability with the aid of such games. Sixty community-dwelling senior volunteers were randomized to either an online game suite designed to train multiple cognitive abilities, or to a control arm with online activities that simulated the look and feel of the games but with low-level interactivity and no calibration of difficulty. Study assessment included measures of recruitment, retention and play-time. Cognitive change was measured with a computerized assessment battery administered just before and within two weeks after completion of the six-week intervention. Impediments to feasibility included: limited access to in-home high-speed internet, large variations in the amount of time devoted to game play, and a reluctance to pursue more challenging levels. Overall analysis was negative for assessed performance (transference effects) even though subjects improved on the games themselves. Post hoc analyses suggest that some types of games may have more value than others, but these effects would need to be replicated in a study designed for that purpose. We conclude that a six-week, moderate-intensity computer game-based cognitive intervention can be implemented with high-functioning seniors, but the effect size is relatively small. Our findings are consistent with Owen et al. (2010), but there are open questions about whether more structured, longer-duration or more intensive games-for-learning interventions might yield more substantial cognitive improvement in seniors.
© 2013 Elsevier Ireland Ltd. All rights reserved.
Keywords: Memory; Cognition; Computer-based; Cognitive training; Age-related cognitive decline
1. Introduction
Cognitive decline, especially memory loss, is a frequent
complaint among older individuals. The term "senior moment" has entered colloquial speech to describe the appearance of all-too-common transient amnestic episodes, and a frequent unspoken fear is that a person's mild age-related memory loss might soon become a devastating and progressive condition such as Alzheimer's disease (AD). The reality is that most older adults experience only mild cognitive declines as they age; only 1 in 9 will develop AD by age 75 (McDowell, 2001). Nonetheless, this decline is a source of both consternation and stress for older adults, and as a result, much attention has been focused on strategies for preventing or ameliorating it.
2. Methods
2.1. Study population
This study was designed as a prospective, single-blind (subject-blinded but not investigator-blinded), controlled study. The study
population was recruited from among community-dwelling
seniors in the greater Lansing region, a suburban and metropolitan
region of approximately 250,000 which is home to a large
university campus. Recruitment was via advertisements in the
local newspaper, at area churches and synagogues, and at
community centers frequented by seniors. Inclusion criteria
included: age over 60, vision and hearing intact enough to see
and hear computer-based stimuli, and daily access to an internet-enabled computer. All prospective subjects were telephone-screened and excluded if there were indicators of non-normal
baseline cognition (prescriptions for cholinesterase inhibitors or
memantine, diagnoses of major psychiatric disease or dementia) or
major intercurrent illness/disability that would prevent compliance with the proposed intervention. At the time of enrollment,
subjects were additionally screened with the SLUMS (St. Louis
University Mental Status exam) (Tariq, Tumosa, Chibnall, Perry, &
Morley, 2006) and excluded from the study if the score was <22 (reflective of dementia in a college-educated individual).
We recruited a total of 60 subjects: 32 for the active and 28 for
the control condition. Sample size was based on a power analysis
with effect sizes from three positive cognitive intervention trials
(Basak et al., 2008; Mahncke et al., 2006; Smith et al., 2009). Effect
sizes between d = 0.24 and 0.7 were calculated for these
interventions, and we assumed 80% power to detect a difference
using one-way ANOVA at alpha = 0.05.
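For illustration only (this is not the authors' original computation, and the exact inputs used to arrive at the final sample of 60 are not stated here), the per-group sample size implied by a given effect size can be sketched in Python. For two groups, a one-way ANOVA is equivalent to an independent-samples t-test, so the t-test power solver applies directly.

```python
# Minimal sketch of a two-group power calculation (assumed, not the study's own script).
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for d in (0.24, 0.7):  # range of effect sizes reported for the cited trials
    n_per_group = solver.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"Cohen's d = {d}: about {n_per_group:.0f} subjects per group")
```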
2.2. Study protocol
The study protocol was reviewed and approved by the Michigan
State University institutional review board for protection of human
subjects in research. After signing informed consent and undergoing screening, subjects were randomized to either the active
intervention or a control condition. In contrast to the experimental
online game condition, the control condition involved passive
computer-based online viewing of three modalities of daily online
news content: text with pictures, audio news stories, and video
news stories. Subjects were not informed that there was a control
group; all subjects believed themselves to be using a program that
might improve cognition over time if used faithfully. All subjects
were debriefed at the time of post-testing, and all subjects
regardless of condition were allowed free use of the MBM games
for an additional 3 months after the trial ended, as a token of thanks
for their participation.
In order to create participant blinding, both groups were given
identical information regarding the purpose of the experiment
("finding out what the effects of regular computer-based mental activity are on thinking ability"). However, informed consent
materials did not mention the nature of the computer activities.
Both written and electronic materials included with the game
suite/control program encouraged participants to aim for 1 h of
program use on most days of the week (5–7).
Contingency management of online activities was put into
place for both the experimental and the control arm. Participants
knew that they would receive $5 for each week that they logged in
to the program at least 5 times for at least 30 min, and an
additional $30 at the end of the study if total log-in time exceeded
1600 min. Participant activity time was monitored weekly, and a
trouble-shooting/encouraging phone call was initiated by a study
staff member if less than 1 h of total activity time was logged for
that week.
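For illustration, the payment rules just described can be expressed as a small helper. The function names are hypothetical, and the weekly criterion is shown under one reading of the protocol (at least five log-ins of at least 30 min each):

```python
# Hypothetical helpers illustrating the contingency-payment rules described above.
def weekly_payment(session_minutes):
    """$5 for a week with at least 5 log-ins of at least 30 minutes each (one reading)."""
    qualifying = [m for m in session_minutes if m >= 30]
    return 5 if len(qualifying) >= 5 else 0

def completion_bonus(total_minutes):
    """Additional $30 if total logged time over the study exceeds 1600 minutes."""
    return 30 if total_minutes > 1600 else 0

print(weekly_payment([35, 40, 31, 30, 45, 20]))  # -> 5
print(completion_bonus(1700))                    # -> 30
```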
The CogState computerized assessment battery was administered to the senior participants before, and again within 2 weeks after, the
six-week intervention period. The CogState assessment consists of
adaptations of standard (paper-based) tests across a range of
cognitive functions including psychomotor speed, attention, decision-making, working memory and new learning. It has age- and
education-adjusted norms up through age 85, and has been well
validated in multiple trials (Fredrickson et al., 2010; Maruff, Falleti,
Collie, Darby, & McStephen, 2005; Weaver Cargin, Maruff, Collie, &
Masters, 2006). Additional advantages over paper-based psychometrics include ease of administration, brief duration (less than
25 min), and repeatability (Falleti, Maruff, Collie, & Darby, 2006), a
major concern in studies such as this, where practice effects over
relatively short intervals can artificially improve performance. The CogState battery used in this study had nine component sub-tests: MON, Monitoring (simple reaction time); IDN, Identification (visual-motor reaction time); CPAL, Continuous Paired Association Learning (visuospatial memory); CPAR, Continuous Paired Association Learning, Delayed Recall; OCL, One Card Learning; OWL, One Word Learning; OWLR, One Word Learning, Delayed Recall; ONB, One-back (working memory); and PRD, Card Prediction (strategic problem solving).
2.6. Statistical analyses
For each subject, for each task where RT was the primary outcome measure, the distribution of RTs was normalized using a logarithmic (base 10) transformation before the mean RT was computed. This transformation was applied because distributions of RTs typically show positive skew, especially if a participant generates one or two very slow responses. Rather than lose data or have to develop criteria for outlier scores in individual distributions, the log10 transformation allows all RTs to remain in the distribution while minimizing the effect of slow RTs on estimates of the mean and variance.
For tasks where performance accuracy was the main outcome measure (i.e. CPAL), normalization was done using an arcsine transformation. Arcsine transformations promote normal distributions in proportional data by extending the possible values at the top end of the data range. NB: for data reporting in Table 3, the group mean RT and accuracy scores were back-transformed in order to allow readers to appreciate the study effects in the original data units.
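A minimal sketch of these per-subject summaries follows. It assumes the arcsine transform is applied to the square root of each proportion (the common form, which is not stated explicitly above) and back-transforms both measures as described:

```python
import numpy as np

def mean_rt(rts_ms):
    """Log10-transform RTs, average, then back-transform (a geometric mean)."""
    log_rts = np.log10(np.asarray(rts_ms, dtype=float))
    return 10 ** log_rts.mean()

def mean_accuracy(proportions):
    """Arcsine(sqrt) transform proportions, average, then back-transform."""
    p = np.asarray(proportions, dtype=float)
    return np.sin(np.arcsin(np.sqrt(p)).mean()) ** 2

# One very slow response pulls the plain mean far more than the log-based mean.
print(mean_rt([420, 450, 430, 1900]))
print(mean_accuracy([0.80, 0.92, 0.65, 0.88]))
```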
We did not evaluate both RT and accuracy for every variable. As
with prior studies utilizing CogState, we considered RT as the
primary outcome measure for those tests which primarily measure
speed-of-processing (IDN, MON, ONB) and were therefore at/near
ceiling on accuracy, and used accuracy for the remaining tests
(OWL, OWLR, PRD, OCL, CPAL and CPAR).
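This routing of sub-tests to primary outcome measures can be summarized in a small lookup; the structure and names below are illustrative, not taken from the study software:

```python
# Primary outcome per CogState sub-test, as assigned above ("rt" = reaction time,
# "acc" = accuracy). Hypothetical structure for illustration only.
PRIMARY_OUTCOME = {
    "IDN": "rt", "MON": "rt", "ONB": "rt",        # speed-of-processing tests
    "OWL": "acc", "OWLR": "acc", "PRD": "acc",
    "OCL": "acc", "CPAL": "acc", "CPAR": "acc",   # accuracy-based tests
}
```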
For all assessed variables in our CogState battery, we
implemented a standard three-step analytical procedure: explore (e.g. descriptive statistics), analyze/estimate (e.g. modeling) and explore (e.g. analysis of residuals). After performing an initial descriptive exploration via Tukey-type plots and cross-tabulations (not presented here), the analysis was performed in two ways. First, we used a two-sample t-test to compare the groups on mean change scores, with change defined as the difference (delta) between time 1 (t1) and time 0 (t0), where delta = t1 − t0, for both test time and test accuracy. Second, we performed an ANOVA, separately for each CogState variable, wherein each subject's baseline score was added as a covariate, first alone and then with informative covariates added. Informative covariates included in the regression models were group assignment (active/control), sex, SLUMS score (in points), education level (defined as years of completed education), employment status (retired, part-time employed, full-time employed) and total time spent on intervention games (in minutes).
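A minimal sketch of these two analyses, using assumed file and column names (e.g. idn_rt_t0, idn_rt_t1) rather than the study's actual variables, might look like this:

```python
# Sketch of (1) a two-sample t-test on change scores and (2) an ANCOVA-style
# regression of the follow-up score on baseline plus covariates.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("cogstate_scores.csv")  # hypothetical per-subject file
df["delta"] = df["idn_rt_t1"] - df["idn_rt_t0"]

active = df.loc[df["group"] == "active", "delta"]
control = df.loc[df["group"] == "control", "delta"]
print(stats.ttest_ind(active, control))

model = smf.ols(
    "idn_rt_t1 ~ idn_rt_t0 + group + sex + slums + education_years "
    "+ employment + total_minutes",
    data=df,
).fit()
print(model.summary())
```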
Table 1
Subject demographics.

                            Active         Control        Total
N                           32             28             60
Sex (% male)                50.0%          32.1%          41.7%
Age, mean (SD)              67.2 (6.37)    70.8 (6.81)    68.9 (6.76)
Employment
  Retired                   65.6%          81.5%          72.9%
  Part-time                 15.6%          7.4%           11.9%
  Full-time                 18.8%          11.1%          15.3%
Education (years)
  13–15                     31.3%          17.9%          25.0%
  16                        18.8%          25.0%          21.7%
  17                        12.5%          14.3%          13.3%
  18–21                     12.5%          28.6%          20.0%
  22+                       25.0%          14.3%          20.0%
SLUMS score, mean (SD)      27.5 (2.20)    27.0 (2.12)    27.2 (2.16)
Table 2
Total activity time (min) per activity, mean (SD). Activities: Keep it in Mind, Photoflaw, Sokoban, Thoughts in Motion, Sound Thinking.

Arm        Per-activity mean (SD) minutes                                   Total
Active     160.6 (115.3)   455.3 (410.6)   335.4 (290.5)   355.8 (301.2)    1307.2
Control    237.2 (259.6)   368.7 (366.3)   396.3 (383.3)                    1002.3
Table 3
CogState sub-test scores at the first (pre-intervention) and second (post-intervention) assessments: back-transformed group means (SD). rt = reaction time; acc = accuracy (proportion correct).

               Active                  Control
               Mean       SD           Mean       SD
IDN rt 1       606        99           609        97
IDN rt 2       588        87           639        153
MON rt 1       449        113          487        86
MON rt 2       426        96           482        94
ONB rt 1       889        163          954        271
ONB rt 2       888        144          904        214
CPAL acc 1     0.517      0.157        0.517      0.169
CPAL acc 2     0.597      0.213        0.572      0.191
CPAR acc 1     0.575      0.270        0.627      0.207
CPAR acc 2     0.623      0.284        0.647      0.281
OCL acc 1      0.698      0.073        0.700      0.121
OCL acc 2      0.697      0.091        0.714      0.123
OWLR acc 1     0.921      0.096        0.934      0.072
OWLR acc 2     0.912      0.102        0.916      0.060
OWL acc 1      0.957      0.043        0.932      0.068
OWL acc 2      0.952      0.040        0.941      0.042
PRD acc 1      0.780      0.065        0.786      0.088
PRD acc 2      0.782      0.081        0.792      0.083
effects for both KIIM and Photoflaw. For the working memory game KIIM, we found robust effects on OWLR and PRD accuracy (F = 4.816, p = 0.04 and F = 4.767, p = 0.04, respectively). Similarly, regression of play time for the visual-spatial processing game Photoflaw resulted in significant effects on IDN response time (F = 9.253, p = 0.01). No other significant effects were seen for individual sub-test reaction time or accuracy (Table 3).
4. Discussion
4.1. Feasibility
There are several facets of feasibility that deserve mention in
an initial overview of the study findings. First, it is clearly possible
to design a suite of computer-based, cognitively stimulating
activities that seniors are willing to play for extended periods of
time: subjects in our active group spent an average of approximately 1300 min (>21 h) over 6 weeks. While our contingency payments
may have helped motivate some, the token nature of the sums ($5/
week, plus $30 for completion of the entire 6-week program)
makes it unlikely that money was a significant factor in
performance. Rather, we believe that the reinforcing nature of
the computer activities themselves, combined with the senior-friendly interface (tested on several groups of seniors in focus
groups prior to implementation) was the critical factor. In
addition, we provided a weekly follow-up phone call to those
individuals who were not logging in for a minimum time each
week. These phone calls appeared to play a tangible role by
reminding some individuals of the availability of the program,
thereby helping them to make time for this task.
Second is the development of an appropriately engaging control
condition. None of the subjects who were randomized to our
control condition were aware that they did not use the active
program (as determined during the debriefing session); they were
truly blinded, thus unlikely to drop out simply because they
suspected being in the control group. Accordingly, with respect to
creation of a suitable control condition, we succeeded in keeping
total log-in time equivalent between our groups. Nonetheless, we
had a drop-out rate of 10% for the active group but nearly 25% in the
less interesting control condition, and a total play time difference
of about 25%, both of which, while not statistically significant in the
traditional sense, were substantial enough to be considered a
confound in our data analysis. They also serve as a caution that
larger studies of this kind will need to specifically address this
issue, either by creating asymmetric contingencies between
groups (such that external rewards are greater for control
participants) or by being more prescriptive in the requirements
for game participation.
Another issue worth mentioning is the possibility of differential placebo effects based on the expectation of benefit for the type of
task being evaluated. This is well-explained in a review by Boot and
Blakely (2011) and is essentially the notion that if a placebo
condition focuses on, say, visual learning, then subjects may do
better on subsequent visual memory tests simply because they
believe they should (face validity), as opposed to, say, an auditory
memory test that is very different in structure from the placebo
task. We explicitly addressed this issue in our study by creating 3
different placebo control exercises, each mimicking aspects of one
of the active games.
4.2. Cognitive effects
Learning effects on the My Better Mind suite of games were
robust but not extreme. Because of the naturalistic set-up of our
paradigm, in which individuals had complete control over how
much they played of each game, and because several of the
4.3. Limitations
The My Better Mind suite of cognitive games included only four
tasks. While these were developed to address 4 different types of
cognitive abilities, they are by no means exhaustive. Our sample of
convenience recruited a relatively young-old, healthy, highly
educated and computer-savvy study population, all of which would work to exert a ceiling effect on expected improvement after the
intervention. In addition, a more prescriptive approach to playing
the games, including how much of which games to play, would
have produced a more consistent active-group intervention (albeit
a less naturalistic one). Our small sample size precluded analyses
looking for effects of one game on another (interaction effects), and
therefore conclusions related to the effects of individual games on
aspects of cognitive improvement may be premature. Finally, the
games as currently designed permit the player to constantly replay
the easiest rounds; a greater degree of forced movement to more
difficult challenges might produce more robust effects. We feel
that these insights are important to the design and implementation of next-generation, more effective computer-game-based
cognitive interventions.
5. Conclusions
It remains controversial whether and to what extent cognitive decline due to aging can be ameliorated (Lustig et al., 2009; Morrison & Chein, 2011). Much prior work has focused on the use of "brain exercises," which appeal both at a theoretical level, given our understanding of brain plasticity (Mahncke et al., 2006), and at an intuitive level, invoking a "use it or lose it" hypothesis. However, it is becoming apparent that one cannot simply put elders in front of a computer screen (or assign them Sudoku puzzles and crosswords) and assume that any form of computer-based interaction will have benefits. There appear to be dose-response effects, challenge-level (difficulty) effects, and task effects that interactively contribute to the overall results. In addition, benefits
appear to be modest in relatively high-functioning seniors,
suggesting a ceiling effect to the use of such interventions (e.g.
Ackerman, Kanfer, & Calderwood, 2010). A very recent, thorough
meta-analysis of computerized cognitive training with older adults
(Kueider, Parisi, Gross, & Rebok, 2012) reported pre-post training
effect sizes of 0.19–7.14 for 8 different neuropsychological software interventions (though only 4 of these used a game approach), and 0.09–1.70 for 8 different video game interventions. As with our
study, the majority of these reviewed studies reported that older
adults did not need to be technologically savvy in order to
successfully complete or benefit from training. However, this
meta-analysis focused primarily on differing effects of specific interventions without considering the often significant
differences in the demographics and baseline characteristics of
the enrolled subjects as well as differences in such factors as the
duration, frequency or diversity of the intervention. Future work
might therefore consider targeting seniors at greatest risk of
cognitive decline and providing a motivational component that
pushes individuals to pursue both a minimum amount of time spent
on task as well as an increasing level of difficulty as skill increases.
Conflicts of interest
None of the authors has any conflicts of interest to disclose
regarding the contents of this manuscript.
Acknowledgements
This project was supported by funding from the Pearl J. Aldrich
Endowment at Michigan State University. The authors would like
to thank John Leahy and Shay Raleigh for their hard work on data
collection, data cleaning and analysis. We would also like to thank
Dr. Paul Maruff for his comments and review of the manuscript,
and Yen Ying Lim who assisted with aspects of the statistical
analysis, both at University of Melbourne, Australia.
References
Ackerman, P. L., Kanfer, R., & Calderwood, C. (2010). Use it or lose it? Wii brain exercise practice and reading for domain knowledge. Psychology and Aging, 25(4), 753–766. http://dx.doi.org/10.1037/a0019277.
Basak, C., Boot, W. R., Voss, M. W., & Kramer, A. F. (2008). Can training in a real-time strategy video game attenuate cognitive decline in older adults? Psychology and Aging, 23(4), 765–777. http://dx.doi.org/10.1037/a0013494.
Boot, W. R., & Blakely, D. P. (2011). Do action video games improve perception and cognition? Frontiers in Psychology, 2, 226. http://dx.doi.org/10.3389/fpsyg.2011.00226.
Falleti, M. G., Maruff, P., Collie, A., & Darby, D. G. (2006). Practice effects associated with the repeated assessment of cognitive function using the CogState battery at 10-minute, one week and one month test-retest intervals. Journal of Clinical and Experimental Neuropsychology, 28(7), 1095–1112. http://dx.doi.org/10.1080/13803390500205718.
Fredrickson, J., Maruff, P., Woodward, M., Moore, L., Fredrickson, A., Sach, J., et al. (2010). Evaluation of the usability of a brief computerized cognitive screening test in older people for epidemiological studies. Neuroepidemiology, 34(2), 65–75. http://dx.doi.org/10.1159/000264823.
Green, C. S., & Bavelier, D. (2003). Action video game modifies visual selective attention. Nature, 423(6939), 534–537. http://dx.doi.org/10.1038/nature01647.
Heeter, C., & Winn, B. (2008). Implications of gender, player type and learning strategies for the design of games for learning. In Beyond Barbie and Mortal Kombat: New perspectives on gender and gaming. Cambridge, MA: MIT Press.
Jaeggi, S. M., Buschkuehl, M., Jonides, J., & Shah, P. (2011). Short- and long-term benefits of cognitive training. Proceedings of the National Academy of Sciences of the United States of America, 108(25), 10081–10086. http://dx.doi.org/10.1073/pnas.1103228108.
Kueider, A. M., Parisi, J. M., Gross, A. L., & Rebok, G. W. (2012). Computerized cognitive training with older adults: A systematic review. PLoS ONE, 7(7), e40588. http://dx.doi.org/10.1371/journal.pone.0040588.
Langbaum, J. B. S., Rebok, G. W., Bandeen-Roche, K., & Carlson, M. C. (2009). Predicting memory training response patterns: Results from ACTIVE. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 64(1), 14–23. http://dx.doi.org/10.1093/geronb/gbn026.
Lustig, C., Shah, P., Seidler, R., & Reuter-Lorenz, P. A. (2009). Aging, training, and the brain: A review and future directions. Neuropsychology Review, 19(4), 504–522. http://dx.doi.org/10.1007/s11065-009-9119-9.
Mahncke, H. W., Connor, B. B., Appelman, J., Ahsanuddin, O. N., Hardy, J. L., Wood, R. A., et al. (2006). Memory enhancement in healthy older adults using a brain plasticity-based training program: A randomized, controlled study. Proceedings of the National Academy of Sciences of the United States of America, 103(33), 12523–12528.