Management Studies Journal
Survey evidence from the UK and the USA suggests that over 90 per cent
of firms engage in some form of development activity for managers (Constable,
1988; Loo, 1991; Saari et al., 1988). In 2000, U.S. organizations with 100 or more
employees budgeted to spend $54 billion on formal training (Staff, 2000). These
organizations understandably want to ensure that such investments are
being put to good use in the form of highly effective training programs.
Many organizations measure effectiveness in terms of financial performance
(Atkinson et al., 1997). Atkinson et al. (1997) discuss some common reasons for
using financial tools for measuring performance.
A central assumption of Barnard's (1947) inducement-contribution schema
is that the various participants or stakeholders must perceive value in the exchange.
He asserts that inducements must be seen as having at least equal or greater value
than contributions: from the stakeholders' perspective, what they receive must be of
equal or greater value than what they contribute. If that relationship holds,
stakeholders continue to provide the contributions expected of them. These
relationships can also be described in contractualist terms, with contributions
as well as inducements forming part of the exchange envisaged by stakeholder
theory.
MANAGING THE TRAINING PROCESS USING A STAKEHOLDER APPROACH
In the sections that follow, we review the literature on the problems that arise at
each step of the training process, and the literature that presents the stakeholder
model as a solution to these problems. Finally, we discuss some examples of who
the stakeholders are and what their contributions and expectations are in the
training process.
When this set of contributions and expectations is viewed in contractual
terms, where the various parties are held accountable for providing their share of
contributions by those who expect them (Heugens & Oosterhout, 2002), it becomes
clear how this model can be used to solve the problems of accountability and
management control direction listed earlier.
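The contractual view above can be made concrete with a small illustrative sketch. All stakeholder names and contribution/expectation items below are hypothetical examples, not taken from the article's tables; the point is only that, under a contractual reading, an accountability gap is any expectation that no stakeholder has contracted to provide.

```python
# A minimal sketch (hypothetical stakeholders and items) of the contractual
# view: each stakeholder contributes some items and expects others, and an
# accountability gap is an expectation no contribution covers.
from dataclasses import dataclass, field


@dataclass
class Stakeholder:
    name: str
    contributions: set = field(default_factory=set)
    expectations: set = field(default_factory=set)


def accountability_gaps(stakeholders):
    """Return each stakeholder's expectations not covered by any contribution."""
    provided = set().union(*(s.contributions for s in stakeholders))
    return {s.name: s.expectations - provided
            for s in stakeholders if s.expectations - provided}


top = Stakeholder("Top Management",
                  contributions={"resources", "training strategy"},
                  expectations={"training plans aligned to strategy"})
line = Stakeholder("Line Managers",
                   contributions={"person-level training needs"},
                   expectations={"evaluation feedback"})
training_dept = Stakeholder("Training Department",
                            contributions={"training plans aligned to strategy"},
                            expectations={"resources", "person-level training needs"})

print(accountability_gaps([top, line, training_dept]))
# {'Line Managers': {'evaluation feedback'}} -- an unmet expectation to resolve
```

In this invented configuration every expectation is matched except the line managers' expectation of evaluation feedback, which is exactly the kind of gap the contractual model is meant to surface.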
Courpasson and Livian (1993) give a useful case study of a large bank in
France where one of the largest training programs received very good trainee
reaction and satisfaction scores. However, the program failed to achieve its
organizational objectives, because there was no shared understanding of what was
expected of the training and how that would translate into the expected results.
Clearly, in this case, the requirements of a key stakeholder, the organization,
were overlooked.
Table 1 gives examples of stakeholders' contributions and
expectations in the TNA phase of the training process. This list is a generic example;
each organization will have its own specific set of contributions and expectations
that drive accountability and management control.
Table 1
Examples of Stakeholders' Contributions and Expectations in the TNA Phase of the
Training Process

Top Management
Contributions:
- Clearly define the mission, vision, and objectives of the company.
- Make available an estimate of the resources required.
- Provide job descriptions and performance benchmarks to help identify person-
  and task-level training needs.
Expectations:
- Delivery managers to incorporate capacity predictions in the training plans.

Line Managers
Contributions:
- Contribute to "person level" TNA by identifying who needs what kind of
  training.
- Provide feedback on existing training programs they have attended in the past.
Expectations:
- Advancement in career.
- Improved skills.
- Better performance appraisals.
Petridou (1997a) calls for an examination of the links between various job and
personal factors, on the one hand, and the training process, on the other, in order
to determine which employees should attend a specific kind of training and how the
organization will be made more effective. Petridou also discusses the selection of
the appropriate training methods in the context of adult learning theory and its
implications:
1. Adults differ widely in their learning styles and capacities. Most groups will
be mixed and benefit from a variety of visual, auditory, interactive, and
self-directed methodologies.
2. Adults want to learn practical information. Unlike children, who are subject-
focused learners, adults are problem focused and seek to learn what can
be employed to remedy a situation.
3. Adults have experience to draw upon. There is much wisdom within every
mix of training participants. Involving participants in their own learning and
using the group's collective intelligence is to the facilitator's and everyone's
advantage.
According to Ford & Wroten (1984), a useful model for describing the
systematic development and interrelated components of a training program is based
on an instructional systems perspective (Goldstein, 1974). Ford and Wroten
state that this training model indicates that there should be a logical flow from the
initial determination of training needs, to program design and delivery, and to training
evaluation, with the results of evaluation then used to reassess
training needs for possible program redesign. A key characteristic of the instructional
system approach is the emphasis on the continuous use of evaluative feedback to
modify the existing training program. In other words, training is seen as an evolving
process which utilizes evaluative information to adapt the program so that it better
meets its stated objectives (Goldstein and Buxton, 1982). Ford and Wroten (1984)
also bring out the importance of content evaluation as an aspect of effective training
design and delivery.
Ford and Wroten state that the existing training literature fails to provide
adequate strategies for evaluating existing training programs in terms of the program's
content and its job relatedness. The training literature also fails to provide methods
for linking training evaluation to training needs reassessment and program redesign.
Arthur et al. (2003) assert that another important factor is the methodology
used to deliver training programs. Wexley and Latham (2002) highlighted the need
to consider skill and task characteristics in determining the most effective training
method. However, there has been very little, if any, primary research directly assessing
these effects (Arthur et al., 2003).
It is clear from the brief discussions above that several key stakeholders are
involved in this stage of the training process as well: for example, trainees, trainers,
subject matter experts, content developers, instructional designers, and line managers.
The contributions and expectations of some of these stakeholders, such as
trainees and trainers, are commonly acknowledged. Those of others like instructional
designers and content developers are not as well understood, as explained by Ford
and Wroten (1984).
Courpasson and Livian (1993) present a case study from a large bank in
France where large and expensive training programs failed to meet their objectives
because they were not designed and implemented with the nature and expectations
of their stakeholders clearly articulated. They were designed to be generic using a
classic theoretical view of the nature of participants. This led to the trainings not
translating into expected benefits for the organization, and to frustration on the part
of a heterogeneous and specialized set of participants.
This case study also gives an example of a training program designed and
conducted with explicit inputs from the people who are impacted by the skills being
covered in the training. This training is shown to be highly successful in meeting
stakeholder objectives.
The design and implementation phases of the training process can be effectively
managed and result in effective trainings if all the stakeholders and the key decision
makers, such as the training department managers, line managers, and top management,
recognize the contributions and expectations of all the stakeholders involved and
drive accountability based on a common understanding of these contributions and
expectations.
Table 2
Examples of Stakeholders' Contributions and Expectations

Top Management
Contributions:
- Provide support in terms of resources, infrastructure, and overall training
  strategy.

Trainees
Contributions:
- Participate in surveys and pilot programs conducted to design and develop
  trainings.

Instructional Designers
Contributions:
- Perform further analysis of the audience.
Expectations:
- Feedback from delivered training on the materials prepared.

Trainers
Contributions:
- Spend time and effort preparing, conducting, and following up on training
  programs.
- Act as change agents to drive organizational objectives.
The Kirkpatrick Model (TKM) was the prime framework for training
evaluation from the 1950s to the late 1980s. It focuses on four levels of evaluation:
reactions, learning, behaviour, and results (Kirkpatrick, 1975). TKM is widely known
and accepted, even if it is rarely fully implemented. After more than 40 years, the
reigning framework for evaluating training, the four-level Kirkpatrick model, rarely
gets beyond the first level: trainee reactions, or the "smiles test" (Nickols, 2005).
Nickols (2005) further says that many believe it is a taxonomy rather than a model
and that it therefore cannot be implemented by itself (e.g., Alliger & Janak, 1989;
Holton, 1996; Wang et al., 2002).
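The four levels can be sketched as a simple lookup. The "typical measure" column below is illustrative (it is not taken from the article), and the helper merely models the observation that most evaluations stop at level 1:

```python
# The four Kirkpatrick levels as a small lookup table; the measures are
# illustrative examples, not from the article.
KIRKPATRICK_LEVELS = [
    (1, "Reactions", "end-of-course satisfaction survey (the 'smiles test')"),
    (2, "Learning", "pre/post knowledge or skill test"),
    (3, "Behaviour", "on-the-job observation some time after training"),
    (4, "Results", "organizational outcomes such as productivity or quality"),
]


def evaluated_through(highest_level):
    """Names of the levels covered when evaluation stops at highest_level."""
    return [name for level, name, _ in KIRKPATRICK_LEVELS if level <= highest_level]


print(evaluated_through(1))  # ['Reactions']
print(evaluated_through(4))  # ['Reactions', 'Learning', 'Behaviour', 'Results']
```

The common case Nickols describes corresponds to `evaluated_through(1)`: only reactions are ever measured.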
Determining the financial ROI of training (Phillips, 1997) is sometimes known
as the fifth level of evaluation. In addition, there are those who suggest that it is
possible and desirable to go beyond TKM and ROI to assess societal effect (Watkins
et al., 1998).
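The ROI level is commonly computed as net program benefits over program costs, expressed as a percentage. As a hedged illustration (the dollar figures below are invented, not from the article):

```python
# Phillips-style training ROI as commonly defined: net monetary benefits
# divided by program costs, as a percentage.
def training_roi_percent(monetary_benefits, program_costs):
    """ROI (%) = (benefits - costs) / costs * 100."""
    return (monetary_benefits - program_costs) / program_costs * 100


# Invented numbers: a program costing $80,000 that yields $120,000 in
# measured benefits returns 50% on the training investment.
print(training_roi_percent(120_000, 80_000))  # 50.0
```

The hard part in practice is not this arithmetic but isolating and monetizing the benefits attributable to training, which is one reason so few organizations attempt it.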
Wang & Wilcox (2006) bring out the importance of organizations
conducting summative evaluation after systematic training. First, summative evaluation
connects all the phases of the training process, Analysis, Design, Development,
Implementation, and Evaluation (ADDIE) (Anderson, 1993), with organizational
goals and objectives. It will not only justify the training budget and human
resource development (HRD) investment but also validate implemented
interventions. More importantly, it demonstrates to the organization decision makers
the value of training interventions. Second, systematic summative evaluation may
discover the areas of training interventions that do not meet the stakeholders'
expectations. Such evaluation will certainly provide opportunities for future
improvement. Last, but not the least, summative evaluation may assist and support
future training and HRD investment. In today's competitive world, training and
HRD are frequently competing with all other functions for organizational resources.
Sound summative evaluation of systematic training demonstrates the accountability
of training and HRD functions and supports decision making regarding future
training investments (Wang and Wilcox, 2006).
However, summative evaluation has been experiencing difficulties and
challenges in research and practice areas of training (Wang & Wang, 2005). Recent
data from the American Society for Training and Development (ASTD) showed that only
12.9% of the largest organizations have conducted any kind of training impact
evaluation (Sugrue & Rivera, 2005), the most important form of summative evaluation.
Wang and Wilcox (2006) ask: if impact (summative) evaluation,
including return-on-investment (ROI) measurement, is so important, why do we still
see so few organizations actually conducting such evaluation in practice?
Wang and Wilcox (2006) bring out a number of reasons for organizations
failing to conduct systematic evaluations. First, many training professionals either
do not believe in evaluation or do not possess the mind-set necessary to conduct
evaluation (Swanson, 2005). Others do not wish to evaluate their training programs
because of the lack of confidence in whether their programs add value to or have
impact on organizations (Spitzer, 1999). Wang and Wilcox also attribute lack of
evaluation in training to the lack of resources and expertise, as well as lack of an
organization culture that supports such efforts (Desimone et al., 2002; Moller et al.,
2000). Even for limited efforts in training evaluation, most are retrospective in nature
(Brown & Gerhardt, 2002; Wang & Wang, 2005). A study of a group of instructional
design practitioners indicated that 89.5% of them conduct end-of-course evaluation
and 71% evaluate learning; however, only 44% use acceptable techniques for measuring
achievement, and merely 20% of those surveyed correctly identified methods for
results evaluation (Moller & Mallin, 1996). Brown and Gerhardt (2002) concluded
that companies expend even less effort in evaluating the instructional design process.
Nickols (2005) contends that these evaluation
tools are not implemented and used widely because current approaches to training
evaluation are primarily of interest to trainers but not to the many constituencies
served by training, trainers, and the training function. To these other constituencies,
current approaches to evaluating training are largely irrelevant. Adopting a different
approach to the evaluation of training, a stakeholder-based one, can solve this
problem of irrelevance.
In a similar vein, Brinkerhoff (2003) further argued that evaluation of training
is a whole-organization concern and should not focus on training programs in isolation.
Instead, what is needed is evaluation of how well organizations use training. This
requires focusing evaluation inquiry on the larger process of training as it is integrated
with performance management, and it includes those factors and actions that determine
whether training can create performance results. By proposing a success case
method, Brinkerhoff demonstrated that evaluating training impact effectively should
involve all relevant stakeholders of a training program (Wang and Wilcox, 2006).
A stakeholder model of managing training evaluation, when evaluation is viewed as
a part of performance management, also gets support from Atkinson et al. (1997).
Some examples of stakeholders' contributions and expectations in this phase are
given in Table 3.
Table 3
Examples of Stakeholders' Contributions and Expectations in the Evaluation Phase
of the Training Process

Top Management
Contributions:
- Provide support and resources to conduct training evaluations on an ongoing
  basis.

Training Department
Contributions:
- Set up and execute the evaluation mechanism best suited for the
  organization's information requirements.

Trainees
Contributions:
- Provide feedback on training programs.
Expectations:
- Their improved productivity and performance as a result of training will
  be recognized and rewarded.
This article looks at the problem that popular training evaluation
methods do not provide effective inputs for managing the training process as a whole
on a day-to-day basis. It proposes a stakeholder-based approach to managing the
training process, which helps overcome these problems. The article reviews the
literature that covers the problems at each step of the training process and then
discusses ideas in support of a stakeholder-based approach as presented by various
authors. The article also gives some examples of stakeholders and their contributions
and expectations at each step of the training process.
References
Alliger, G. M.; and Janak, E. A. (1989), "Kirkpatrick's Levels of Training Criteria: Thirty
Years Later," Personnel Psychology, 42(2), pp. 331-342.
Anderson, A. H. (1993), Successful Training Practice: A Manager's Guide to Personnel
Development, Cambridge, MA : Blackwell.
Arthur, W.; Bennett, W.; Edens, P. S.; and Bell, S. T. (2003), "Effectiveness of Training in
Organizations: A Meta-analysis of Design and Evaluation Features," Journal of
Applied Psychology, 88(2), pp. 234-245.
Atkinson, A. A.; Waterhouse, J. H.; and Wells, R. B. (1997), "A Stakeholder Approach to
Strategic Performance Measurement," Sloan Management Review, 38(3), pp. 25-37.
Barnard, C. (1947), The Functions of the Executive, Cambridge, MA : Harvard University
Press.
Brancato, C. K. (1995), New Corporate Performance Measures: A Research Report,
Conference Board.
Brinkerhoff, R. O. (2003), The Success Case Method, San Francisco, CA : Berrett-Koehler
Publishers.
Brown, K. G.; and Gerhardt, M. W. (2002), "Formative Evaluation: An Integrative Practice
Model and Case Study," Personnel Psychology, 55(4), pp. 951-984.
Burke, M. J.; and Day, R. R. (1986), "A Cumulative Study of the Effectiveness of
Managerial Training," Journal of Applied Psychology, 71(2), pp. 232-245.
Chiu, W.; Thompson, D.; Mak, W.; and Lo, K. L. (1999), "Re-thinking Training Needs
Analysis", Personnel Review, 28(1/2), pp. 77-90.
Constable, C. J. (1988), Developing the Competent Manager in a UK Context. Report for
the Manpower Services Commission, U.K.
Courpasson, D.; and Livian, Y. F. (1993), "Training for Strategic Change: Some Conditions
of Effectiveness : A Case in the Banking Sector in France," The International Journal
of Human Resource Management, 4(2), pp. 465-479.
Delahaye, B. (1992), "A Theoretical Context of Management Development and Education,"
in Smith, B.J. (Ed.), Management Development in Australia (1-18), Sydney:
Harcourt Brace Jovanovich.
Desimone, R. L.; Werner, J. M.; and Harris, D. M. (2002), Human Resource Development,
Cincinnati, OH: South-Western.
Donaldson, T.; and Preston, L. E. (1995), "The Stakeholder Theory of the Corporation:
Concepts, Evidence, and Implications," The Academy of Management Review,
20(1), pp. 65-91.
Ford, J. K.; and Noe, R. A. (1987), "Self-assessed Training Needs: The Effects of
Attitudes Toward Training, Managerial Level, and Function," Personnel
Psychology, 40(1), pp. 39-53.
Ford, J. K.; and Wroten, S. P. (1984), "Introducing New Methods for Conducting
Training Evaluation and for Linking Training Evaluation to Program Redesign,"
Personnel Psychology, 37(4), pp. 651-665.
Goldstein, I. L. (1974), Training: Program Development and Evaluation, Monterey,
CA : Brooks/Cole.
Goldstein, I. L., Training in Organizations: Needs Assessment, Development, and
Evaluation, Monterey, CA : Brooks/Cole Publishing Co.
Goldstein, I. L.; and Gessner, M. J. (1988), "Training and Development in Work
Organizations," in Cooper, C. L.; and Robertson, I. T. (Eds.), International
Review of Industrial and Organizational Psychology, pp. 43-72, Chichester: Wiley.
Herbert, G. R.; and Doverspike, D. (1990), "Performance Appraisal in the Training
Needs Analysis Process: A Review and Critique," Public Personnel Management,
19(3), pp. 253-270.
Heugens, P. P. M. A. R.; and van Oosterhout, J. (2002), "The Confines of Stakeholder
Management: Evidence from the Dutch Manufacturing Sector," Journal of
Business Ethics, 40(4), pp. 387-403.
Holton, E. F. (1996), "The Flawed Four-level Evaluation Model," Human Resource
Development Quarterly, 7(1), pp. 5-21.
Kirkpatrick, D. L. (1975), Evaluating Training Programs, Madison, WI : ASTD.
Knowles, M. S. (1970), The Modern Practice of Adult Education: Andragogy Versus
Pedagogy, New York : Association Press.
Leat, M. J.; and Lovell, M. J. (1997), "Training Needs Analysis: Weaknesses in the
Conventional Approach," Journal of European Industrial Training, 21(4),
pp. 143-153.
Loo, R. (1991), "Management Training in Canadian Organizations," Journal of
Management Development, 10(5).
McGehee, W.; and Thayer, P. W. (1961), Training in Business and Industry, New York:
Wiley.
Moller, L.; and Mallin, P. (1996), "Evaluation Practices of Instructional Designers and
Organizational Supports and Barriers," Performance Improvement Quarterly,
9(4), pp. 82-92.
Nickols, F. W. (2005), "Why a Stakeholder Approach to Evaluating Training," Advances
in Developing Human Resources, 7(1), pp. 121-134.
O'Driscoll, M. P.; and Taylor, P. J. (1992), "Congruence between Theory and Practice
in Management Training Needs Analysis," Public Personnel Management,
21(4), pp. 593-603.
Phillips, J. J. (1997), Return on Investment in Training and Performance Improvement
Programs, Houston, TX : Gulf Publishing.
Saari, L. M.; Johnson, T. R.; McLaughlin, S. D.; and Zimmerle, D. M. (1988), "A Survey
of Management Training and Education Practices in U.S. Companies", Personnel
Psychology, 41(4), pp. 731-743.
Staff (2000), "Industry Report 2000 : A Comprehensive Analysis of Employer-Sponsored
Training in the United States," Training, 37(10).
Stephan, E.; Mills, G. E.; Pace, R. W.; and Ralphs, L. (1988), "HRD in the Fortune 500",
Training and Development Journal, 42(1), pp. 26-32.
Sugrue, B.; and Rivera, R. J. (2005), "ASTD 2005 State of the Industry Report",
Alexandria, VA : ASTD.
Swanson, R. A. (2005), "Evaluation, a State of Mind", Advances in Developing Human
Resources, 7(1), pp. 16-21.
Virmani, B. R.; and Seth, P. (1985), Evaluating Management Training and Development,
New Delhi : Vision Books, p. 68.
Wang, G. G.; Dou, Z.; and Li, N. (2002), "A Systems Approach to Measuring Return on
Investment for HRD Interventions," Human Resource Development Quarterly,
13(2), pp. 203-224.
Wang, G. G.; and Wang, J. (2005), "Human Resource Development Evaluation: Emerging
Market, Barriers, and Theory Building," Advances in Developing Human
Resources, 7(1), pp. 22-36.
Wang, G. G.; and Wilcox, D. (2006), "Training Evaluation : Knowing More Than Is
Practiced," Advances in Developing Human Resources, 8(4), pp. 528-539.
Watkins, R.; Leigh, D.; Foshay, R.; and Kaufman, R. (1998), "Kirkpatrick Plus: Evaluation
and Continuous Improvement with a Community Focus," Educational Technology
Research and Development, 46(4), pp. 90-96.
Wexley, K. N.; and Latham, G. P. (2002), Developing and Training Human Resources in
Organizations, Upper Saddle River, NJ : Prentice Hall.