
Archives of Gerontology and Geriatrics 57 (2013) 1–7


Effects of a computer-based cognitive exercise program on age-related cognitive decline

Andrea Bozoki a,*, Mirjana Radovanovic b, Brian Winn c, Carrie Heeter c, James C. Anthony d

a Departments of Neurology and Radiology, Michigan State University, United States
b University Psychiatric Hospital, Zaloska 29, 1000 Ljubljana, Slovenia
c Department of Telecommunication, Information Studies, and Media, Michigan State University, United States
d Department of Epidemiology and Statistics, Michigan State University, United States

A R T I C L E  I N F O

Article history:
Received 6 September 2012
Received in revised form 23 February 2013
Accepted 28 February 2013
Available online 29 March 2013

A B S T R A C T
We developed a senior-friendly suite of online games for learning with interactive calibration for
increasing difficulty, and evaluated the feasibility of a randomized clinical trial to test the hypothesis that
seniors aged 60–80 can improve key aspects of cognitive ability with the aid of such games. Sixty
community-dwelling senior volunteers were randomized to either an online game suite designed to
train multiple cognitive abilities, or to a control arm with online activities that simulated the look and
feel of the games but with low-level interactivity and no calibration of difficulty. Study assessment
included measures of recruitment, retention and play-time. Cognitive change was measured with a
computerized assessment battery administered just before and within two weeks after completion of
the six-week intervention. Impediments to feasibility included: limited access to in-home high-speed
internet, large variations in the amount of time devoted to game play, and a reluctance to pursue more
challenging levels. Overall analysis was negative for assessed performance (transference effects) even
though subjects improved on the games themselves. Post hoc analyses suggest that some types of games
may have more value than others, but these effects would need to be replicated in a study designed for
that purpose. We conclude that a six-week, moderate-intensity computer game-based cognitive
intervention can be implemented with high-functioning seniors, but the effect size is relatively small.
Our findings are consistent with Owen et al. (2010), but there are open questions about whether more
structured, longer-duration or more intensive games-for-learning interventions might yield more
substantial cognitive improvement in seniors.
© 2013 Elsevier Ireland Ltd. All rights reserved.

Keywords:
Memory
Cognition
Computer-based
Cognitive training
Age-related cognitive decline

1. Introduction
Cognitive decline, especially memory loss, is a frequent
complaint among older individuals. The term "senior moment"
has entered colloquial speech to describe the appearance of all-too-common transient amnestic episodes, and a frequent unspoken fear is that a person's mild age-related memory loss might
soon become a devastating and progressive condition such as
Alzheimer's disease (AD). The reality is that most older adults
experience only mild cognitive declines as they age;
only 1 in 9 will develop AD by age 75 (McDowell, 2001).
Nonetheless, such decline is a source of both consternation and stress for
older adults, and as a result, much attention has been focused on
strategies for preventing or ameliorating it.

* Corresponding author at: B-444 Clinical Center, 788 Service Road, East Lansing,
MI 48824, United States. Tel.: +1 517 884 2482.
E-mail address: Andrea.bozoki@ht.msu.edu (A. Bozoki).
0167-4943/$ – see front matter © 2013 Elsevier Ireland Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.archger.2013.02.009

A popular hypothesis is that computer-based brain exercises
might be used to help seniors preserve and possibly improve
cognitive performance during the years of life when functional
memory ability is declining, since most seniors retain a strong
capacity for new learning, and many are motivated to engage in
puzzles and games. Research results from the last decade have
frequently been positive, demonstrating durable effects (Willis
et al., 2006) not only on the specific task(s) being practiced but
also transfer effects to related brain activities (Basak, Boot, Voss, &
Kramer, 2008; Green & Bavelier, 2003; Mahncke et al., 2006).
However, it is less clear whether the burgeoning industry of
computer-based cognitive training is effective for seniors, though
it has quickly become a multi-million-dollar online industry, with
claims of beneficial effects on cognition.
The actual data on computer-game-based training in seniors
are mixed, with some researchers reporting cognitive improvement (Smith et al., 2009) and some not (Papp, Walsh, & Snyder,
2009). Recently, a highly publicized paper by Owen et al. (2010)
found no compelling effects from two types of computer-game-based
training. Notably, it has the largest sample size ever
recruited for such a study: more than 11,000 young to middle-aged
individuals. However, individuals completed only 4 h of
game time during the six-week interval of training (24 10-min
sessions); therefore, the gaming experience may not have been
intensive enough. It might also not have been sufficiently varied,
with only six available exercises per group. Moreover, study
participants were in an age range where cognitive performance
remains relatively stable, without the increasing incidence rates
for amnestic episodes that are seen after age 60. By contrast, two
just-published studies (Nouchi et al., 2012; Van Muijden, Band,
& Hommel, 2012) have addressed several of the deficiencies in
the Owen paradigm, albeit at the expense of much smaller
sample sizes. Interestingly, both of these studies did report
positive effects: Nouchi et al. used the commercially available
program Brain Age and obtained meaningful transfer effects
from game play to measures of executive functions and
processing speed, while Van Muijden et al. found at least
limited effects on a battery of untrained executive processing
measures (2/9 measures improved in the active treatment
group). However, neither of these two studies included
measures of episodic memory, nor were they hypothesized to
benefit that most vulnerable area of cognitive decline. Therefore,
unresolved questions remain about these interventions: most
fundamentally, whether they work (Jaeggi, Buschkuehl, Jonides,
& Shah, 2011; Lustig, Shah, Seidler, & Reuter-Lorenz, 2009). If
they do, are they applicable to seniors, and do some forms of
computer-based games for learning work better than others?
In designing our study, the first aim was to explore facets of
feasibility. This included the question of whether it would be
possible to develop a reinforcing suite of online games for
learning, of moderate-to-strong intensity and interactivity,
that would engage seniors for repeated sessions
during a voluntary six-week study regimen emulating
real-life usage, as well as the challenge of creating a
control condition of sufficient appeal and engagement to match
total playtime among users. This aim was judged to be primary,
and we took into account the evolving literature suggesting that
seniors strongly prefer casual, short games that focus on repetitive
game play rather than on story line or sophisticated graphics,
express a strong aversion to violent games and to ones requiring
extensive hand controls, and like to see their progress and
beat prior high scores (McKay & Maki, 2010;
Nap, Kort, & IJsselsteijn, 2009). Our games for learning were
designed to engage and reward performance (Heeter & Winn,
2008), as well as to ramp up in difficulty over time, to take
advantage of these preferences.
An additional aim, judged to be secondary in light of sample
size constraints and the resulting statistical power, was to
estimate the effect size of the games on different facets of cognitive performance,
especially those memory skills that are central to age-related
memory decline: episodic memory (the ability to retain and
recall new information after a delay) and working memory (the
ability to actively hold and manipulate multiple pieces of
information in mind simultaneously). We were particularly
interested in addressing some of the perceived flaws of prior
work: specifically, creating an active control condition that
would address outstanding questions regarding the benefits of
nonspecific computer use, and ensuring a minimum level of
participation among all subjects through a contingency reward
program. We also felt that multiple activities (a suite of games,
as in the Owen et al. study) would be more effective than a single
task or game for producing transfer effects, and accordingly
created 4 games in our suite, each stressing a different aspect of
cognition.

2. Methods
2.1. Study population
This study was designed as a prospective, single-blind (subject-blinded
but not investigator-blinded), controlled study. The study
population was recruited from among community-dwelling
seniors in the greater Lansing region, a suburban and metropolitan
area of approximately 250,000 which is home to a large
university campus. Recruitment was via advertisements in the
local newspaper, at area churches and synagogues, and at
community centers frequented by seniors. Inclusion criteria
included: age over 60, vision and hearing intact enough to see
and hear computer-based stimuli, and daily access to an internet-enabled
computer. All prospective subjects were telephone-screened
and excluded if there were indicators of non-normal
baseline cognition (prescriptions for cholinesterase inhibitors or
memantine, diagnoses of major psychiatric disease or dementia) or
major intercurrent illness/disability that would prevent compliance with the proposed intervention. At the time of enrollment,
subjects were additionally screened with the SLUMS (St. Louis
University Mental Status exam) (Tariq, Tumosa, Chibnall, Perry, &
Morley, 2006) and excluded from the study if the score was <22
(reflective of dementia in a college-educated individual).
We recruited a total of 60 subjects: 32 for the active and 28 for
the control condition. Sample size was based on a power analysis
with effect sizes from three positive cognitive intervention trials
(Basak et al., 2008; Mahncke et al., 2006; Smith et al., 2009). Effect
sizes between d = 0.24 and 0.7 were calculated for these
interventions, and we assumed 80% power to detect a difference
using one-way ANOVA at alpha = 0.05.
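The stated assumptions can be checked with a quick normal-approximation calculation. This is a sketch, not the authors' actual power analysis; the z-values are hardcoded for two-sided alpha = 0.05 and 80% power:

```python
import math

# Normal-approximation sample size per group for a two-group mean
# comparison; a sketch, not the authors' original power analysis.
Z_ALPHA = 1.960   # two-sided alpha = 0.05
Z_POWER = 0.8416  # power = 0.80

def n_per_group(d: float) -> int:
    """Approximate subjects per group needed to detect effect size d."""
    return math.ceil(2 * ((Z_ALPHA + Z_POWER) / d) ** 2)

print(n_per_group(0.7))   # upper end of the cited effect sizes -> 33
print(n_per_group(0.24))  # lower end -> 273
```

Under these assumptions, roughly 33 subjects per group suffice for d = 0.7, while d = 0.24 would require about 273 per group, which helps explain why the effect-size aim was treated as secondary.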
2.2. Study protocol
The study protocol was reviewed and approved by the Michigan
State University institutional review board for protection of human
subjects in research. After signing informed consent and undergoing screening, subjects were randomized to either the active
intervention or a control condition. In contrast to the experimental
online game condition, the control condition involved passive
computer-based online viewing of three modalities of daily online
news content: text with pictures, audio news stories, and video
news stories. Subjects were not informed that there was a control
group; all subjects believed themselves to be using a program that
might improve cognition over time if used faithfully. All subjects
were debriefed at the time of post-testing, and all subjects
regardless of condition were allowed free use of the MBM games
for an additional 3 months after the trial ended, as a token of thanks
for their participation.
In order to create participant blinding, both groups were given
identical information regarding the purpose of the experiment
(finding out what the effects of regular computer-based mental
activity are on thinking ability). Nonetheless, informed consent
materials did not mention the nature of the computer activities.
Both written and electronic materials included with the game
suite/control program encouraged participants to aim for 1 h of
program use on most days of the week (5–7).
Contingency management of online activities was put into
place for both the experimental and the control arm. Participants
knew that they would receive $5 for each week that they logged in
to the program at least 5 times for at least 30 min, and an
additional $30 at the end of the study if total log-in time exceeded
1600 min. Participant activity time was monitored weekly, and a
trouble-shooting/encouraging phone call was initiated by a study
staff member if less than 1 h of total activity time was logged for
that week.
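The contingency rules above can be sketched as a small payout function. This is an illustration only; reading "at least 5 times for at least 30 min" as five qualifying sessions of 30 min each per week is our assumption, and the function names are ours:

```python
# Illustrative payout logic for the contingency program described above.
# Assumes "at least 5 times for at least 30 min" means five sessions of
# >= 30 min each in a given week; names are ours, not the study's.
def weekly_bonus(session_minutes):
    """$5 for a week containing at least five sessions of >= 30 min."""
    qualifying = sum(1 for m in session_minutes if m >= 30)
    return 5 if qualifying >= 5 else 0

def total_payout(weeks):
    """weeks: list of per-week session-duration lists (minutes)."""
    payout = sum(weekly_bonus(week) for week in weeks)
    total_minutes = sum(sum(week) for week in weeks)
    if total_minutes > 1600:   # end-of-study completion bonus
        payout += 30
    return payout
```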

2.3. Active intervention


We developed an online suite of games for learning as part of
this feasibility study. Entitled "My Better Mind", the program
operates through an auto-download-enabled Flash Player interface
(Adobe). The active program consists of 4 games, each one
designed to target a different cognitive domain. (1) Photoflaw: the
object of the game is to find tiny differences between seemingly
identical pairs of photographs of scenes. It is designed to engage
visual attention, visual working memory (since individuals have to
keep the original image in mind as they look for differences in the
2nd image) and visual-spatial relationships. It also emphasizes
speeded processing (the game is timed). (2) Headline Clues is
similar to a crossword puzzle but focused on current events;
players solve headlines with missing words and letters, about news
of the day. It is designed to engage verbal memory and reasoning,
and also emphasizes speeded processing. (3) Sokoban is a logic
puzzle game that exercises strategic planning and uses both
reasoning and visual-spatial skills. The object of the game is to
push lenses through a maze and onto targets in the minimum
number of moves. It is designed to engage spatial executive
processing and non-verbal reasoning. (4) Keep It In Mind is a game-like
task where the goal is to remember a sequence of progressively
longer lists of items. Players are first shown two items and then
asked to pick out those items from a superset of similar items. Each
round progresses from remembering 2, then 3, then 4, on up to 7
items. Keep It In Mind players choose the item type they want to
work on (numbers, letters, words, patterns, objects) as well as a
difficulty setting. It was developed to strongly engage working
memory (both visuospatial and verbal).
Each program is designed with the following capabilities: (1)
each game begins with an introductory level, aimed at familiarizing those who have never played online games before. Following
that, each game has three user-selectable levels of difficulty (easy,
moderate, and hard) to provide challenge as individuals gain skill
with the task. (2) Performance data are collected at both the
individual game level and across practice sessions, to document
self-selected challenge level and achievement and to provide user
performance feedback. (3) The program keeps track of total time
spent per session, as well as the number of sessions, to provide data
about compliance. This information is also available to participants, so they can self-monitor.
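As a sketch of the kind of per-session record that such logging implies (the class and field names here are our own illustration, not the actual My Better Mind implementation):

```python
from dataclasses import dataclass, field

# Illustrative data model for the compliance logging described above;
# class and field names are assumptions, not the actual MBM code.
@dataclass
class GameSession:
    game: str           # e.g. "Sokoban"
    difficulty: str     # "intro", "easy", "moderate" or "hard"
    minutes: float      # time spent in this session
    score: int          # performance feedback shown to the player

@dataclass
class Participant:
    subject_id: str
    sessions: list = field(default_factory=list)

    def total_minutes(self) -> float:
        """Total logged time, the quantity used for compliance monitoring."""
        return sum(s.minutes for s in self.sessions)

p = Participant("S01")
p.sessions.append(GameSession("Sokoban", "easy", 25.0, 310))
p.sessions.append(GameSession("Keep It In Mind", "moderate", 12.5, 180))
```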
2.4. Control condition
We designed the control program, also labeled "My Better
Mind", to emulate the look and feel of the active games. It
included three activity selections with titles similar to the active
games: Thoughts In Motion (online video news clips of the day),
Sound Thinking (audio news clips of the day), and Headline Clues
(textual news stories of the day, with pictures). We made the
control condition accessible via the same auto-download mechanism used in the online games condition. This made it possible to
gather information regarding usage (time spent logged in). The
control program delivers a pre-selected mix of informative articles
culled from websites and audio- and video-based instructional and
educational information. Login and the process of activity selection
are identical to the active game, but the activities are neither
interactive nor calibrated for individually increasing difficulty; all that
is required is passive viewing of the online information.
2.5. Outcome measures
In addition to feasibility, we estimated the effect of the online
game experience using a computerized battery of selected
neuropsychometric tests known as CogState™, administered to
the senior participants before, and again within 2 weeks after, the
six-week intervention period. The CogState assessment consists of
adaptations of standard (paper-based) tests across a range of
cognitive functions including psychomotor speed, attention, decision-making, working memory and new learning. It has age- and
education-adjusted norms up through age 85, and has been well
validated in multiple trials (Fredrickson et al., 2010; Maruff, Falleti,
Collie, Darby, & McStephen, 2005; Weaver Cargin, Maruff, Collie, &
Masters, 2006). Additional advantages over paper-based psychometrics include ease of administration, brief duration (less than
25 min), and repeatability (Falleti, Maruff, Collie, & Darby, 2006), a
major concern in studies such as this, where practice effects over
relatively short intervals can artificially improve performance. The
CogState battery used in this study had nine component sub-tests:
MON – Monitoring (simple reaction time), IDN – Identification
(visual-motor reaction time), CPAL – Continuous Paired Association
Learning (visuospatial memory), CPAR – Continuous Paired Association Delayed Recall, OCL – One Card Learning, OWL – One Word
Learning, OWLR – One Word Learning Delayed Recall, ONB – One-back (working memory) and PRD – Card Prediction (strategic
problem solving).
2.6. Statistical analyses
For each subject, for each task where RT was the primary
outcome measure, distributions of RTs were normalized using a
logarithmic (base 10) transformation before the mean RT was
computed. This transformation was applied because distributions
of RTs typically show positive skew, especially if a participant
generates one or two very slow responses. Rather than lose data or
have to develop criteria for outlier scores in individual distributions, log10 transformation allows all RTs to remain in the
distribution while minimizing the effect of slow RTs on estimates
of the mean and variance.
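A minimal sketch of this normalization, with illustrative RTs rather than study data:

```python
import math

# Per-subject RT normalization as described above: log10-transform each
# RT, take the mean in log space, then back-transform for reporting.
rts_ms = [420, 455, 438, 441, 2100]   # one very slow response

mean_log = sum(math.log10(rt) for rt in rts_ms) / len(rts_ms)
log_mean_rt = 10 ** mean_log          # geometric mean, in ms

raw_mean = sum(rts_ms) / len(rts_ms)
# The slow outlier stays in the distribution but inflates the raw mean
# far more than the log-space (geometric) mean.
```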
For tasks where performance accuracy was the main outcome
measure (e.g. CPAL), normalization was done using an arcsine
transformation. Arcsine transformations promote normal distributions in proportional data by extending the possible values at
the top end of the data range. NB: for data reporting in Table 3, the
group mean performance measures of RT and accuracy scores were
back-transformed in order to allow readers to appreciate the study
effects in the original data units.
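A minimal sketch of the arcsine (angular) transform and the back-transform used for reporting, with illustrative values:

```python
import math

# Arcsine transform for accuracy proportions, as described above, with
# the back-transform used to report group means in original units.
def to_angular(p: float) -> float:
    return math.asin(math.sqrt(p))

def from_angular(t: float) -> float:
    return math.sin(t) ** 2

accuracies = [0.92, 0.95, 0.98]              # near-ceiling proportions
mean_angular = sum(to_angular(p) for p in accuracies) / len(accuracies)
group_mean_acc = from_angular(mean_angular)  # back in proportion units
```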
We did not evaluate both RT and accuracy for every variable. As
with prior studies utilizing CogState, we considered RT the
primary outcome measure for those tests which primarily measure
speed of processing (IDN, MON, ONB) and were therefore at or near
ceiling on accuracy, and used accuracy for the remaining tests
(OWL, OWLR, PRD, OCL, CPAL and CPAR).
For all assessed variables in our CogState battery, we
implemented a standard three-step analytical procedure: explore
(e.g. descriptive statistics), analyze/estimate (e.g. modeling) and
explore again (e.g. analysis of residuals). After performing an initial
descriptive exploration via Tukey-type plots and cross-tabulations
(not presented here), the analysis was performed in two ways.
First, we used a two-sample t-test to compare means of differences
(the result being defined as the difference (delta) between time 1 (t1)
and time 0 (t0), where delta = t1 − t0, for test time and test
accuracy) on test results between the two groups. Second, we
performed an ANOVA, separately for each CogState variable,
wherein each subject's baseline score was added as a covariate,
first alone and then with informative covariates added. Informative covariates included in regression models were group
assignment (active/control), sex, SLUMS score (in points), education level (defined as years of completed education), employment
status (retired, part-time employed, full-time employed) and total
time spent on intervention games (in minutes). The approach

allowed for fitting of a more complete covariate-adjusted model to
estimate individual associations in the context of hypothesized
influential covariates (e.g., educational attainment).

Table 1
Subject demographics.

          N    Sex      Age            Employment                        Education (years)                    SLUMS score
               % Male   Mean   SD      Retired   Part-time   Full-time   13–15   16      17      18–21   22+   Mean   SD
Active    32   50.0%    67.2   6.37    65.6%     15.6%       18.8%       31.3%   18.8%   12.5%   12.5%   25.0%  27.5   2.20
Control   28   32.1%    70.8   6.81    81.5%     7.4%        11.1%       17.9%   25.0%   14.3%   28.6%   14.3%  27.0   2.12
Total     60   41.7%    68.9   6.76    72.9%     11.9%       15.3%       25.0%   21.7%   13.3%   20.0%   20.0%  27.2   2.16
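The covariate-adjusted model can be sketched as an ordinary least-squares fit on simulated data (not the study data; coefficients and noise levels below are invented for illustration):

```python
import numpy as np

# ANCOVA-style model sketched on simulated data (not the study data):
# follow-up score regressed on group assignment with the baseline score
# as a covariate, mirroring the second analysis described above.
rng = np.random.default_rng(0)
n = 60
group = (np.arange(n) < 32).astype(float)    # 1 = active, 0 = control
baseline = rng.normal(600, 100, n)           # e.g. IDN RT at time 0
followup = 120 + 0.8 * baseline - 5.0 * group + rng.normal(0, 30, n)

# Design matrix: intercept, group indicator, baseline covariate.
X = np.column_stack([np.ones(n), group, baseline])
coef, *_ = np.linalg.lstsq(X, followup, rcond=None)
intercept, group_effect, baseline_slope = coef
```

The covariate-adjusted `group_effect` is the quantity of interest; further covariates (sex, SLUMS score, education, employment, total play time) would be appended as additional columns of `X`.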
The criterion for statistical significance for all comparisons or
correlations was set to p < 0.01. This was done to balance the risk
of false-positive findings against identification of important
relationships, given that (a) performance on the neuropsychological
outcome measures was likely to be highly correlated, especially for
tests that assess the same cognitive domain, (b) this is an
exploratory investigation in a relatively new area of neuropsychology in which an important clinical issue has been identified,
and (c) measures of effect size were used to guide interpretation
of the meaningfulness of results. Specifically, Type I errors
were suspected where comparisons were statistically significant at
the corrected level but where effect sizes were very small (e.g.,
d < 0.20). Analyses were executed in Stata 10 (StataCorp, 2008)
and in SPSS 21 (IBM, 2011).
3. Results
3.1. Feasibility
Despite advertising for individuals with daily access to an
internet-enabled computer, it was necessary to screen 86
individuals in order to find 60 who had true ad libitum access
to a sufficiently modern computer system with a high-speed (non-dialup)
internet connection, judged to be necessary for study
participation. There were no statistically significant differences in
drop-out rates between groups (three out of 32 in the active online
game condition and seven out of 28 in the control condition
dropped out prior to study completion; Fisher's exact test
p = 0.306) or in the frequency and duration of log-ins (active
condition, n = 27, mean time = 1300 min vs. control condition,
n = 23, mean time = 1000 min; t = 1.4; p = 0.16).
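The drop-out comparison can be run as a stdlib-only two-sided Fisher's exact test, summing hypergeometric probabilities no larger than that of the observed table. Note that reported p-values can differ depending on the tail convention and software used, so this sketch need not reproduce the paper's figure exactly:

```python
from math import comb

# Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
# here a/c are drop-outs and b/d completers in the two study arms.
def fisher_two_sided(a, b, c, d):
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def pmf(x):  # hypergeometric P(first-cell count = x)
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = pmf(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    return sum(pmf(x) for x in range(lo, hi + 1)
               if pmf(x) <= p_obs * (1 + 1e-9))

# 3/32 active vs. 7/28 control drop-outs, as reported above.
p = fisher_two_sided(3, 29, 7, 21)
```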
3.2. Demographics
Overall demographic characteristics of our sample are fairly
typical of self-selected groups recruited in the setting of a
university town: educational attainment of 16.7 years (active
group) and 16.9 years (control group) with a mean age of 67.8
(active group) and 70.2 (control group) reflect a well-educated
"young-old" group, more likely to have access to, and familiarity
with, the internet. A majority were retired after prior work for pay.
Approximately one-fourth of the total group was still working at
least part-time. The mean score on the SLUMS was 27, reflecting
exclusion of individuals with scores below 22. There were no drop-out
subjects with a SLUMS score < 25; the mean SLUMS score was not
different between completers and drop-outs (p > 0.05). There were
no statistically significant differences between the experimental
and control groups in terms of sex, SLUMS score, education or
employment (Table 1).
3.3. Intervention
Descriptive analysis of time spent on gaming showed no robust
difference between active and control groups (t = 1.4; p = 0.16).
However, the active group had at least a tendency to
play for a longer total period than the control group (1307 ± 644
vs. 1002 ± 877 min S.D., though this difference was not statistically
significant) and, within the active group, there was at least a trend toward less time spent
playing the working memory game (Keep It In Mind; 155 ± 117 min
S.D.) than the other three games (440 ± 412 min S.D.; 324 ± 292 min
S.D.; 344 ± 303 min S.D.). There was also large variation in time spent
playing this game, ranging from a low of 16 rounds to a high of 2837
rounds. Fourteen active subjects logged in on only 3 or 4 days; 6 subjects
logged in on 30 or more days. 59.5% devoted less than 5% of all gaming
time to the memory game, and 48.8% played Keep It In Mind on one-fourth
or fewer of the days they played any game (Table 2).
By using the computer to keep track of individual game times
within the active online game condition, we discovered an
unexpected imbalance across the available games. Specifically,
the Keep It In Mind (KIIM) working memory game was not
played as frequently as the other games when the mean log-in time
statistics were compared, although there was very large
between-individual variability: 7 subjects logged in on over 30 days
of the six-week interval while 15 others played just 3–4 days out of
the entire six-week interval. There were no such imbalances in
relation to the other games in the suite. In contrast to the other
three games, KIIM was more of an overt memory exercise than a
game. It was not necessarily harder to play, unless players wanted
it to be hard. Players had complete control over the difficulty level as
well as how many items they tried to remember.
Based on paired t-tests among the 27 participants who played
Headline Clues for at least 2 weeks, the difference in average speed
between their first and last week of play (115 s in the first week
versus 73 s in the final week) was significant, with faster play in the last
week (t = 7.783, df = 26, p < 0.001). The difference in average
headlines solved between their first and last week of play (90%
solved in the first week versus 98% solved in the final week) approached
but did not achieve significance (t = 1.937, df = 26, p = 0.06). This
was likely because many subjects hit a ceiling of 100% solved.
Table 2
Total activity time (min) per activity.

                    Keep It In Mind   Photoflaw      Head. Clues (A)   Sokoban   Total
Active    Mean      160.6             455.3          335.4             355.8     1307.2
          SD        115.3             410.6          290.5             301.2

                    Thoughts Motion   Sound Think.   Head. Clues (C)             Total
Control   Mean      237.2             368.7          396.3                       1002.3
          SD        259.6             366.3          383.3

Based on paired t-tests among the 30 participants who played
Keep It In Mind for at least 2 weeks, the difference in average
solving speed between the first and last week of play (19.7 s in the
first week versus 21.8 s in the final week) was not significantly
different. The difference in average memory challenges solved
between the first and last week of play (71% solved in the first week
versus 76% solved in the final week) was significantly better in the final
week (t = 2.413, df = 29, p = 0.02).
Based on paired t-tests among the 28 participants who played
Sokoban for at least 2 weeks, the difference in average speed
between the first and last week of play (274.6 s in the first week
versus 285.4 s in the final week) was not significantly different
(t = 0.372, df = 29, p = 0.71). The difference in average puzzles
solved between the first and last week of play (87% solved in the
first week versus 72% solved in the final week) was significantly worse in
the final week (t = 3.474, df = 26, p < 0.001). Note that Sokoban
puzzles get harder the more one plays, so subjects faced much harder
challenges the longer they played. Many subjects were successfully
solving puzzles in the final week that were 3 or more levels (out of
6) above where they started.
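The first-week vs. final-week comparisons above are paired t-tests on per-subject weekly means; a stdlib sketch with illustrative numbers (not the study data):

```python
import math

# Paired t-test on per-subject weekly means, as used for the game-level
# learning comparisons above; the values below are illustrative only.
first_week = [110, 120, 131, 99, 118, 125]   # mean solve time, week 1 (s)
final_week = [72, 80, 69, 75, 71, 78]        # mean solve time, final week

diffs = [f - l for f, l in zip(first_week, final_week)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))      # compare to t with n-1 df
```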
3.4. CogState testing
Our initial descriptive step of data exploration revealed
differences in means between active and control groups roughly
ten times smaller than the effects reported in the studies we used
for our power analysis. t-Tests comparing the pre-post test delta
between groups yielded no significant differences in means on any
of our assessed variables. When examining individual CogState
tests with our ANCOVA approach, we again failed to identify any
significant differences. These results were not altered by inclusion
of covariates including sex, age, educational attainment, and
SLUMS score. When total log-in time was regressed against
individual performance across our 9 variables, there was a
significant main effect of time for a single variable, IDN RT, and
there was no interaction effect by control vs. active group.
Because many subjects in the active group did not play all 4
games equally (in some cases, at all), we undertook a secondary
analysis examining effects of individual game play-time on
CogState performance, based on regression with pre-post test
change in score (in the active game group only, restricted to our
variables of interest as previously described), and found significant
effects for both KIIM and Photoflaw. For the working memory game
KIIM, we found robust effects on OWLR and PRD accuracy
(F = 4.816, p = 0.04 and F = 4.767, p = 0.04). Similarly, regression of play time for the visual-spatial processing game Photoflaw
resulted in significant effects on IDN response time (F = 9.253,
p = 0.01). No other significant effects were seen for individual sub-test
reaction time or accuracy (Table 3).

Table 3
CogState test scores.

CogState test     Active Mean   Active SD   Control Mean   Control SD
IDN rt 1          606           99          609            97
IDN rt 2          588           87          639            153
MON rt 1          449           113         487            86
MON rt 2          426           96          482            94
ONB rt 1          889           163         954            271
ONB rt 2          888           144         904            214
CPAL acc 1        0.517         0.157       0.517          0.169
CPAL acc 2        0.597         0.213       0.572          0.191
CPAR acc 1        0.575         0.270       0.627          0.207
CPAR acc 2        0.623         0.284       0.647          0.281
OCL acc 1         0.698         0.073       0.700          0.121
OCL acc 2         0.697         0.091       0.714          0.123
OWLR acc 1        0.921         0.096       0.934          0.072
OWLR acc 2        0.912         0.102       0.916          0.060
OWL acc 1         0.957         0.043       0.932          0.068
OWL acc 2         0.952         0.040       0.941          0.042
PRD acc 1         0.780         0.065       0.786          0.088
PRD acc 2         0.782         0.081       0.792          0.083

1 – pre-test; 2 – post-test; rt – reaction time; acc – accuracy; MON – Monitoring
(simple reaction time), IDN – Identification (visual-motor reaction time), CPAL –
Continuous Paired Association Learning (visuospatial memory), CPAR – Continuous
Paired Association Delayed Recall, OCL – One Card Learning, OWL – One Word
Learning, OWLR – One Word Learning Delayed Recall, ONB – One-back (working
memory) and PRD – Card Prediction (strategic problem solving). All rt values are in
milliseconds; all accuracies are the ratio of items correct to total items presented.
4. Discussion
4.1. Feasibility
There are several facets of feasibility that deserve mention in
an initial overview of the study findings. First, it is clearly possible
to design a suite of computer-based, cognitively stimulating
activities that seniors are willing to play for extended periods of
time: subjects in our active group spent an average of
1300 min (>21 h) over 6 weeks. While our contingency payments
may have helped motivate some, the token nature of the sums ($5/
week, plus $30 for completion of the entire 6-week program)
makes it unlikely that money was a significant factor in
performance. Rather, we believe that the reinforcing nature of
the computer activities themselves, combined with the senior-friendly
interface (tested on several groups of seniors in focus
groups prior to implementation), was the critical factor. In
addition, we provided a weekly follow-up phone call to those
individuals who were not logging in for a minimum time each
week. These phone calls appeared to play a tangible role by
reminding some individuals of the availability of the program,
thereby helping them to make time for this task.
Second is the development of an appropriately engaging control
condition. None of the subjects who were randomized to our
control condition were aware that they did not use the active
program (as determined during the de-briefing session); they were truly blinded, and thus unlikely to drop out simply because they suspected being in the control group. Accordingly, with respect to creation of a suitable control condition, we succeeded in keeping total log-in time equivalent between our groups. Nonetheless, we had a drop-out rate of 10% for the active group but nearly 25% in the less interesting control condition, and a total play time difference of about 25%; both differences, while not statistically significant in the traditional sense, were substantial enough to be considered a confound in our data analysis. They also serve as a caution that larger studies of this kind will need to specifically address this
issue, either by creating asymmetric contingencies between
groups (such that external rewards are greater for control
participants) or by being more prescriptive in the requirements
for game participation.
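As a rough illustration of why a 10% versus 25% drop-out difference can fall short of traditional significance in a study of this size, a two-proportion z-test on counts consistent with those rates yields p well above 0.05. The per-arm sample size (30) and the exact counts below are hypothetical; this is a sketch, not the study's actual analysis.

```python
from math import erf, sqrt


def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1.0 - p_pool) * (1.0 / n1 + 1.0 / n2))
    z = (p1 - p2) / se
    # Two-sided p value from the standard normal CDF (via erf).
    p_two_sided = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p_two_sided


# Hypothetical counts consistent with the reported rates
# (10% vs ~25% dropout, assuming 30 randomized per arm):
z, p = two_proportion_z(3, 30, 7, 30)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```

With these assumed counts the test is non-significant, even though the absolute difference in rates is large enough to bias per-protocol analyses.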
Another issue worth mentioning is the possibility of differential placebo effects based on the expectation of benefit for the type of task being evaluated. This is well explained in a review by Boot and Blakely (2011) and is essentially the notion that if a placebo condition focuses on, say, visual learning, then subjects may do better on subsequent visual memory tests simply because they believe they should (face validity), as opposed to, say, an auditory memory test that is very different in structure from the placebo task. We explicitly addressed this issue in our study by creating three different placebo control exercises, each mimicking aspects of one of the active games.
4.2. Cognitive effects
Learning effects on the My Better Mind suite of games were robust but not extreme. Because of the naturalistic set-up of our paradigm, in which individuals had complete control over how much they played of each game, and because several of the games became progressively harder the longer subjects played (for KIIM, subjects were encouraged to seek harder levels of game play over time), some measures of improvement in performance were not significant. However, in at least one of these instances (Sokoban), the results are confounded by increasing difficulty level over time, and in another (Headline Clues), by a ceiling effect despite some modest difficulty increases built into the program.
With respect to the hypothesized influence of the My Better Mind games on facets of cognitive performance, our findings were somewhat disappointing. Our effect sizes (d = 0.02 to 0.12) were roughly ten times smaller than those in the prior published literature, and this study was underpowered to detect differences of that magnitude; this is comparable to the findings of Owen et al., who noted similarly small improvements in both the active and control groups in their study. Our findings might be partially explained by the relatively highly educated and computer-savvy study population included in this research, which could have exerted a ceiling effect on expected improvement after the intervention, at least on the one-word learning and recall tests, in which accuracies were in the range of 91-96%.
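To make the power problem concrete: with a standardized mean difference near the top of the observed range (d = 0.12) and roughly 30 participants per arm, a textbook normal-approximation power calculation for a two-sample comparison gives power well under 10%. The per-group n is an assumption for illustration; the formula is the standard approximation, not the study's own computation.

```python
from math import erf, sqrt


def normal_cdf(x: float) -> float:
    """Standard normal CDF expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def approx_power(d: float, n_per_group: int) -> float:
    """Approximate power of a two-sided two-sample test at alpha = 0.05
    for standardized mean difference d, equal groups of size n_per_group.
    Ignores the negligible opposite-tail rejection probability."""
    z_crit = 1.959964  # two-sided critical value at alpha = 0.05
    noncentrality = d * sqrt(n_per_group / 2.0)
    return 1.0 - normal_cdf(z_crit - noncentrality)


# Effect sizes in the observed range (d = 0.02 to 0.12) versus a
# large effect, each with an assumed 30 participants per arm:
for d in (0.02, 0.12, 1.0):
    print(f"d = {d:.2f}: power ~ {approx_power(d, 30):.2f}")
```

Under these assumptions, detecting d near 0.1 with 80% power would require group sizes in the hundreds, far beyond a 60-person feasibility trial.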
However, our post hoc analysis suggests, in keeping with current evolving research, that some types of training are more effective than others (Jaeggi et al., 2011; Langbaum, Rebok, Bandeen-Roche, & Carlson, 2009). Although we did not test for interaction effects between the different games in our suite, we found main effects suggesting that two of the four games may have been more effective in promoting cognitive gains. KIIM improved accuracy on delayed verbal recall and on predicting the next item in a sequence, with a trend toward improvement in item identification. Photoflaw, a game that on the surface primarily promotes aspects of visual recognition and visuospatial processing, showed a trend toward improvement on the ONB. We suspect this may be related to the need to keep the original image in mind while searching the modified one (in effect, a one-back task). Photoflaw also produced a robust effect on response time for the item identification task, which is less surprising given the nature of the game.
In addition, with respect to working memory effects, we note
that many of the actively gaming seniors did not allocate much
time to KIIM, and/or replayed the same easier levels repeatedly
without advancing to higher degrees of challenge. As such, it could be that the absence of stronger working memory effects is due to our failure to engage seniors in this specific game, which has important implications for future experiments of this type.
The complete freedom of study participants to choose which games they played, and more importantly, to set their preferred difficulty level, is a potentially serious impediment to cognitive benefits. In a series of game sub-analyses focused on Keep It In Mind, Heeter and Winn reported large differences within the groups regarding time spent and which activities individual participants frequented (Heeter & Winn, 2008). The results reveal a natural tendency on the part of a subset of participants to choose modest challenges and to focus on one or a few brain domains that are easiest for them. This group was least likely to receive the cognitive benefits that motivated them to play in the first place. A moderate level of challenge, not too easy but not impossibly hard, is likely to be an optimal prescription for cognitive gain. Yet gaming for entertainment implies voluntary, player-controlled challenge. Of our four games, only one (Sokoban) became progressively more difficult within a chosen difficulty level over time. The other three games had a consistent level of challenge that did not vary the longer someone played. Future research should examine the role of self-challenge in cognitive gain from cognitive games.

4.3. Limitations
The My Better Mind suite of cognitive games included only four tasks. While these were developed to address four different types of cognitive abilities, they are by no means exhaustive. Our sample of convenience recruited a relatively young-old, healthy, highly educated and computer-savvy study population, all of which would work to exert a ceiling effect on expected improvement after the intervention. In addition, a more prescriptive approach to playing the games, including how much of which games to play, would have produced a more consistent active-group intervention (albeit a less naturalistic one). Our small sample size precluded analyses looking for effects of one game on another (interaction effects), and therefore conclusions related to the effects of individual games on aspects of cognitive improvement may be premature. Finally, the games as currently designed permit the player to constantly replay the easiest rounds; a greater degree of forced movement to more difficult challenges might produce more robust effects. We feel that these insights are important to the design and implementation of next-generation, more effective computer-game-based cognitive interventions.
5. Conclusions
It remains controversial whether and to what extent age-related cognitive decline can be remediated (Lustig et al., 2009; Morrison & Chein, 2011). Much prior work has focused on the use of brain exercises, which appeal both at a theoretical level, through an understanding of brain plasticity (Mahncke et al., 2006), and at an intuitive level, through a "use it or lose it" hypothesis. However, it is becoming apparent that the assumption that simply putting elders in front of a computer screen (or assigning them Sudoku puzzles and crosswords) will yield benefits from any form of computer-based interaction is clearly false. There appear to be dose-response effects, challenge-level (difficulty) effects, and task effects that interactively contribute to the overall results. In addition, benefits appear to be modest in relatively high-functioning seniors, suggesting a ceiling effect for such interventions (e.g. Ackerman, Kanfer, & Calderwood, 2010). A very recent, thorough meta-analysis of computerized cognitive training with older adults (Kueider, Parisi, Gross, & Rebok, 2012) reported pre-post training effect sizes of 0.19-7.14 for 8 different neuropsychological software interventions (though only 4 of these used a game approach), and 0.09-1.70 for 8 different video game interventions. As in our study, the majority of the reviewed studies reported that older adults did not need to be technologically savvy in order to successfully complete or benefit from training. However, this meta-analysis focused primarily on the differing effects of specific interventions without considering the often significant differences in the demographics and baseline characteristics of the enrolled subjects, as well as differences in such factors as the duration, frequency or diversity of the intervention. Future work might therefore consider targeting seniors at greatest risk of cognitive decline and providing a motivational component that pushes individuals to pursue both a minimum amount of time spent on task and an increasing level of difficulty as skill increases.
Conflicts of interest
None of the authors has any conflicts of interest to disclose
regarding the contents of this manuscript.
Acknowledgements
This project was supported by funding from the Pearl J. Aldrich
Endowment at Michigan State University. The authors would like


to thank John Leahy and Shay Raleigh for their hard work on data
collection, data cleaning and analysis. We would also like to thank
Dr. Paul Maruff for his comments and review of the manuscript,
and Yen Ying Lim who assisted with aspects of the statistical
analysis, both at University of Melbourne, Australia.
References
Ackerman, P. L., Kanfer, R., & Calderwood, C. (2010). Use it or lose it? Wii brain exercise practice and reading for domain knowledge. Psychology and Aging, 25(4), 753-766. http://dx.doi.org/10.1037/a0019277.
Basak, C., Boot, W. R., Voss, M. W., & Kramer, A. F. (2008). Can training in a real-time strategy video game attenuate cognitive decline in older adults? Psychology and Aging, 23(4), 765-777. http://dx.doi.org/10.1037/a0013494.
Boot, W. R., & Blakely, D. P. (2011). Do action video games improve perception and cognition? Frontiers in Psychology, 2, 226. http://dx.doi.org/10.3389/fpsyg.2011.00226.
Falleti, M. G., Maruff, P., Collie, A., & Darby, D. G. (2006). Practice effects associated with the repeated assessment of cognitive function using the CogState battery at 10-minute, one week and one month test-retest intervals. Journal of Clinical and Experimental Neuropsychology, 28(7), 1095-1112. http://dx.doi.org/10.1080/13803390500205718.
Fredrickson, J., Maruff, P., Woodward, M., Moore, L., Fredrickson, A., Sach, J., et al. (2010). Evaluation of the usability of a brief computerized cognitive screening test in older people for epidemiological studies. Neuroepidemiology, 34(2), 65-75. http://dx.doi.org/10.1159/000264823.
Green, C. S., & Bavelier, D. (2003). Action video game modifies visual selective attention. Nature, 423(6939), 534-537. http://dx.doi.org/10.1038/nature01647.
Heeter, C., & Winn, B. (2008). Implications of gender, player type and learning strategies for the design of games for learning. In Beyond Barbie and Mortal Kombat: New perspectives on gender and gaming. Cambridge, MA: MIT Press.
Jaeggi, S. M., Buschkuehl, M., Jonides, J., & Shah, P. (2011). Short- and long-term benefits of cognitive training. Proceedings of the National Academy of Sciences of the United States of America, 108(25), 10081-10086. http://dx.doi.org/10.1073/pnas.1103228108.
Kueider, A. M., Parisi, J. M., Gross, A. L., & Rebok, G. W. (2012). Computerized cognitive training with older adults: A systematic review. PLoS ONE, 7(7), e40588. http://dx.doi.org/10.1371/journal.pone.0040588.
Langbaum, J. B. S., Rebok, G. W., Bandeen-Roche, K., & Carlson, M. C. (2009). Predicting memory training response patterns: Results from ACTIVE. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 64(1), 14-23. http://dx.doi.org/10.1093/geronb/gbn026.
Lustig, C., Shah, P., Seidler, R., & Reuter-Lorenz, P. A. (2009). Aging, training, and the brain: A review and future directions. Neuropsychology Review, 19(4), 504-522. http://dx.doi.org/10.1007/s11065-009-9119-9.
Mahncke, H. W., Connor, B. B., Appelman, J., Ahsanuddin, O. N., Hardy, J. L., Wood, R. A., et al. (2006). Memory enhancement in healthy older adults using a brain plasticity-based training program: A randomized, controlled study. Proceedings of the National Academy of Sciences of the United States of America, 103(33), 12523-12528. http://dx.doi.org/10.1073/pnas.0605194103.
Maruff, P., Falleti, M. G., Collie, A., Darby, D., & McStephen, M. (2005). Fatigue-related impairment in the speed, accuracy and variability of psychomotor performance: Comparison with blood alcohol levels. Journal of Sleep Research, 14(1), 21-27. http://dx.doi.org/10.1111/j.1365-2869.2004.00438.x.
McDowell, I. (2001). Alzheimer's disease: Insights from epidemiology. Aging (Milan, Italy), 13(3), 143-162.
McKay, S. M., & Maki, B. E. (2010). Attitudes of older adults toward shooter video games: An initial study to select an acceptable game for training visual processing. Gerontechnology, 9(1), 5-17. http://dx.doi.org/10.4017/gt.2010.09.01.001.00.
Morrison, A. B., & Chein, J. M. (2011). Does working memory training work? The promise and challenges of enhancing cognition by training working memory. Psychonomic Bulletin & Review, 18(1), 46-60. http://dx.doi.org/10.3758/s13423-010-0034-0.
Nap, H. H., de Kort, Y. A. W., & IJsselsteijn, W. A. (2009). Senior gamers: Preferences, motivations and needs. Gerontechnology, 8(4), 247-262. http://dx.doi.org/10.4017/gt.2009.08.04.003.00.
Nouchi, R., Taki, Y., Takeuchi, H., Hashizume, H., Akitsuki, Y., Shigemune, Y., et al. (2012). Brain training game improves executive functions and processing speed in the elderly: A randomized controlled trial. PLoS ONE, 7(1), e29676. http://dx.doi.org/10.1371/journal.pone.0029676.
Owen, A. M., Hampshire, A., Grahn, J. A., Stenton, R., Dajani, S., Burns, A. S., et al. (2010). Putting brain training to the test. Nature, 465(7299), 775-778. http://dx.doi.org/10.1038/nature09042.
Papp, K. V., Walsh, S. J., & Snyder, P. J. (2009). Immediate and delayed effects of cognitive interventions in healthy elderly: A review of current literature and future directions. Alzheimer's & Dementia, 5(1), 50-60. http://dx.doi.org/10.1016/j.jalz.2008.10.008.
Smith, G. E., Housen, P., Yaffe, K., Ruff, R., Kennison, R. F., Mahncke, H. W., et al. (2009). A cognitive training program based on principles of brain plasticity: Results from the Improvement in Memory with Plasticity-based Adaptive Cognitive Training (IMPACT) study. Journal of the American Geriatrics Society, 57(4), 594-603. http://dx.doi.org/10.1111/j.1532-5415.2008.02167.x.
Tariq, S. H., Tumosa, N., Chibnall, J. T., Perry, M. H., 3rd, & Morley, J. E. (2006). Comparison of the Saint Louis University mental status examination and the mini-mental state examination for detecting dementia and mild neurocognitive disorder: A pilot study. The American Journal of Geriatric Psychiatry, 14(11), 900-910. http://dx.doi.org/10.1097/01.JGP.0000221510.33817.86.
Van Muijden, J., Band, G. P. H., & Hommel, B. (2012). Online games training aging brains: Limited transfer to cognitive control functions. Frontiers in Human Neuroscience, 6, 221. http://dx.doi.org/10.3389/fnhum.2012.00221.
Weaver Cargin, J., Maruff, P., Collie, A., & Masters, C. (2006). Mild memory impairment in healthy older adults is distinct from normal aging. Brain and Cognition, 60(2), 146-155. http://dx.doi.org/10.1016/j.bandc.2005.10.004.
Willis, S. L., Tennstedt, S. L., Marsiske, M., Ball, K., Elias, J., Koepke, K. M., et al. (2006). Long-term effects of cognitive training on everyday functional outcomes in older adults. The Journal of the American Medical Association, 296(23), 2805-2814. http://dx.doi.org/10.1001/jama.296.23.2805.
