
Copyright by
Jeeyon Paek
2005
ABSTRACT

The purpose of this study is to examine the impact of training program characteristics on training effectiveness among organizations receiving training services from external training providers. Specifically, the study evaluates training effectiveness as a function of the nature of the relationship between client organizations and external training providers, the training needs assessment, and the nature of the training program. In addition, it investigates the relationship between training effectiveness as perceived by client organizations and as measured by financial performance. The literature review identified four variables for examining training effectiveness: evaluation of training, partnership training between client organizations and educational institutions, training needs assessment, and the nature of the training program. Two survey instruments were developed to measure the variables. One survey asked HRD managers about training program characteristics, and the other asked senior managers about their perceptions of training effectiveness and about operational margin information. Surveys were sent to companies that received training funds from the Ohio Investment in Training Programs from 2002 to 2004. Forty-five of 125 companies completed both surveys, for a response rate of 36 percent. The collected data were merged with demographic information from the OITP database.

The results showed that most participating organizations are privately owned manufacturing companies, and that the major external training providers are private organizations. Few companies engage in partnership training with educational institutions. Operational margin increased more in programs that involved private training providers than in programs that did not, and it also increased when external training providers were involved in more stages of the training process. In addition, senior managers perceived entirely developed training programs as more effective than generic, standard programs or customized programs. No relationship was documented between training needs assessment and training program effectiveness. This study offers implications for future research in HRD as well as for practitioners in business, workforce development policy, and workforce development in higher education.

Dedicated to my sisters,

Soyon, Seokyon, Nayon, and Ohyon

ACKNOWLEDGMENTS

I would like to express sincere gratitude to my adviser, Dr. Joshua D. Hawley, for his endless encouragement and consistent support. His encouragement and considerate assistance carried my doctoral journey to completion, even though at many times I lacked the patience and endurance to pursue the study. His generous understanding and support helped me become more confident in my ability to carry out this study. I also appreciate the way he shared his knowledge and experience in Workforce Development & Education Policy.

I want to convey my deep appreciation to Dr. Ronald L. Jacobs, who served on my committee in Human Resource Development and was also my boss for the last four years. He always challenged the logic and ideas I brought to him, and his continuous challenges improved not only the quality of this work but also the quality of who I am as a young scholar. I was blessed to work with him as a GRA from the beginning of my study, and I appreciate his trust and experienced coaching.

I also thank Dr. Joe A. Gliem, my research methodology committee member. I

learned fundamental, essential research methodologies through classes and meetings with

him. I appreciate his insightful teaching and support of my work.

A special thanks goes to Dr. Ada Demb, who served on my general exam committee in Higher Education. I appreciate the role model she provided as a woman scholar, and I value what I learned from her about international viewpoints and the acceptance of cultural diversity.

I would like to thank Ms. Jamie Klinger and Ms. Carla Wood at the Ohio Department of Development for their proactive support in data collection. Although collecting the data required tremendous effort, their help greatly relieved my burden. I also appreciate their enthusiasm for supporting organizations through training and educating the workforce.

In addition, I would like to extend my appreciation to Dr. Ae-Kyung Choi at

Ewha Womans University. She has been my lifetime mentor for the last 15 years and has

guided not only my academic and professional life but also my personal values. I deeply

appreciate her caring and love.

I also owe gratitude to so many of my friends at The Ohio State University. They shared their wisdom and experience about how to succeed in this academic program, and I thank them for their friendship and for our mutual learning. I also want to thank the sisters and brothers at the Korean Church of Columbus for their prayers in the Lord.

I do not know how to express my gratitude to my parents, Mrs. Eun-Suk Lee and Mr. Ju-Hyun Paek. Without their support and trust, I might not have been able to begin this study. They have taught me how to follow our Lord through their everyday lives, and their belief encouraged me to prepare myself as a scholar.

Above all, I would like to praise the Lord, who prepares my way and completes His good will through my life. He has expressed His great love and support to me through everyday, intimate living with Him. I joyfully appreciate everything He has provided me in this work, and I hope to glorify Him through it.

VITA

March 29, 1971 ………………………….Born – Seoul, Korea

1994 ……………………………………...B.A. Secretarial Science,


Ewha Womans University

1994 – 1995 ……………………………...Secretary/Client Support, Gartner Group, Inc.

1996 – 1997 ……………………………...Associate Consultant, The Optima Consultants

1998 ………………………………….......Business Analyst, Oppenheimer Funds

1998 – 1999 ……………………………...Teaching Fellow, New York University

1999 ……………………………………...M.B.A. Finance, New York University

1999 – 2000 ……………………………...Lecturer, Ewha Womans University

2000 ……………………………………...Certificate, Strategy and Entrepreneurship in


the IT Industry, Stanford University

2000 – 2001 ……………………………...Team Manager, Korea Bond Pricing Co.

2002 – 2005 ……………………………...Graduate Research Associate,


The Ohio State University

PUBLICATIONS

Research Publication

1. Paek, J. (2004, March). A systems approach to mentoring: A literature review. In T. M. Egan (Ed.), Proceedings of the Academy of Human Resource Development (pp. 367-374). Austin, Texas: Academy of Human Resource Development.

2. Paek, J. (2002, October). Women workforce development in Korea: Issues and environmental forces. Proceedings of the SOM Conference on Globalization, Innovation and HRD for Competitive Advantages (pp. 229-236). Bangkok, Thailand.

3. Hawley, J. D. & Paek, J. (2005, March). Developing human resources for the
technical workforce: A comparative study of Korea and Thailand. International Journal
of Training and Development, 9(1), 79-96.

4. Hawley, J. D. & Paek, J. (2004, November). Developing human resources for the technical workforce: A comparative study of Korea and Thailand. In Y. Moon, A. M. Osman-Gani, S. Kim, G. L. Roth, & H. Oh (Eds.), Proceedings of the Third Asian AHRD Conference (pp. 328-335). Seoul, Korea.

5. Jacobs, R. L., Wanstreet, C. E., & Paek, J. (2004, May). Using system theory to
evaluate organizational change in an Indian software firm: A case study. Proceedings of
the Fifth Conference on HRD Research and Practice across Europe. Dublin, Ireland.

6. Lee, C. & Paek, J. (2003, May). Exploring strategic staff development in higher
education institutions. Proceedings of the Fourth Conference on HRD Research and
Practice across Europe. Toulouse, France.

FIELDS OF STUDY

Major Field: Education

Workforce Development and Educational Policy Joshua D. Hawley, Ed.D.


Human Resource Development Ronald L. Jacobs, Ph.D.
Research Methods and Statistics Joseph A. Gliem, Ph.D.
Higher Education Administration Ada Demb, Ed.D.

TABLE OF CONTENTS

Page

Abstract …………………………………………………………………………………..ii
Dedication ……………………………………………………………………………….iv
Acknowledgments ………………………………………………………………………..v
Vita ……………………………………………………………………………………...vii
List of Tables …………………………………………………………………………….xi
List of Figures …………………………………………………………………………..xiv

Chapters:

1. Introduction …………………………………………………………..……… 1

Statement of the Problem ………………………………………………... 4


Research Questions ……………………………………………………….6
Definition of Terms ……………………………………………………….7
Limitations ………………………………………………………………10
Significance ……………………………………………………………...11

2. Review of Literature ………………………………………………………...15

Evaluation of Training …………………………………………………..15


Training Evaluation in Human Resource Development ………...15
Training Evaluation in Labor Economics ……………………….18
Evaluation of State-Funded, Employer-Based Training ………...20
Measurement of Organizational Performance …………………..23
University-Industry Partnership and Partnership Training ……………...30
University-Industry Partnership …………………………………30
Empirical Works of Partnership Training ……………………….34
Partnership Training ……………………………………………..42
Training Needs Assessment ……………………………………………..45
Definition of Needs Assessment & Training Needs Assessment..45
Ways of Conducting Needs Assessment ………………………...46
Roles of the Needs Assessment in Training …………………….49
Nature of the Training Program …………………………………………51
Customization …………………………………………………...52
Relationship to Job ………………………………………………58
Conceptual Framework ………………………………………………….59
3. Methodology ………………………………………………………………...63

Research Type …………………………………………………………...63


Research Setting and Sample ……………………………………………65
Research Setting …………………………………………………65
Sample …………………………………………………………...67
Operationalization of Variables …………………………………………70
Instrument Development ………………………………………………...74
Instrument Validity ……………………………………………...75
Instrument Reliability …………………………………………...75
Design of the Instruments ……………………………………….76
Research Procedures …………………………………………………….78
Data Collection ………………………………………………….78
Data Analysis ……………………………………………………80

4. Results ……………………………………………………………………….. 83

Descriptive Statistics …………………………………………………….83


Respondents……………………………………………………...88
Findings on Research Questions ………………………………………...96
Research Question One ………………………………………….96
Research Question Two ………………………………………..103
Research Question Three ………………………………………112
Research Question Four ………………………………………..114
Research Question Five ………………………………………..127

5. Summary, Discussions, and Implications……………………………………129

Summary of Findings …………………………………………………..129


Discussions ………………...…………………………………………..133
Implications……………………………………………………………..138
Implications for Future Research……………………………….138
Implications for Practice and Policy……………………………142

References …………………………………………………………………….. 145

Appendices ……………………………………………………………………. 154

Appendix A: Training Program and a Relationship with Training Provider


Survey Questionnaire
Appendix B: Training Effectiveness Survey Questionnaire
Appendix C: Cover Email Scripts for both Surveys
Appendix D: Supporting letters from OITP for each Survey

LIST OF TABLES

Table Page

3.1 Cronbach’s Alpha Coefficients for the Survey Responses (Senior manager survey,
n=44)……………………………………………………………..…….…...……76

4.1 Frequency and Percentage of Type of Industry among Respondent Companies,


Non-respondent Companies, and Not-in-Sample Frame Companies (OITP
database) …………………………………………………………………...……86

4.2 Company Size, Number of Participants in Training, and Base Wage of Training
Participants among Respondent Companies, Non-respondent Companies, and
Not-in-Sample Frame Companies (OITP database)………………………..……87

4.3 Demographic Information for Manager Survey Respondents (Manager survey,


n=45)……………………………………………………………………………..91

4.4 Demographic Information for Senior Manager Survey Respondents (Senior


manager survey, n=45)…………………………………………………….…….92

4.5 Demographic Information of Participant Companies (OITP database and Manager


survey, n=45)…………………………………………………………………….93

4.6 Demographic Information of the Training Programs (Manager survey,


n=45)……………………………………………………………………………..94

4.7 Frequencies and Percentages of Type of Training Providers (Manager survey,


n=45)……………………………………………………………………………..97

4.8 One-way ANOVA on Type of Training Provider-Perception on Effectiveness of


this Specific Program (Manager survey and Senior manager survey,
n=43)………………………………………………………………………..……98

4.9 One-way ANOVA on Type of Training Provider-Perception on Relative


Effectiveness (Manager survey and Senior manager survey, n=43)……………..99

4.10 One-way ANOVA on Type of Training Provider-Increase in Operational Margin


(Manager survey and Senior manager survey, n=13)…………………………..100

4.11 Independent Samples t-test on In-house Training Staff and Training Program
Effectiveness (Manager survey and Senior manager survey)…………………..101

4.12 Independent Samples t-test on Educational Institution Providers and Training
Program Effectiveness (Manager survey and Senior manager
survey)…………………………………………………………………..………102

4.13 Independent Samples t-test on Private Training Providers and Training Program
Effectiveness (Manager survey and Senior manager survey)…………………..103

4.14 Summary Data: Regression of Perception of the Specific Training Program


Effectiveness on Selected Variables in the Relationship with External Training
Providers (Manager survey and Senior manager survey)………………..……..106

4.15 Standard Multiple Regression of Perception of the Specific Training Program


Effectiveness on Selected Variables in the Relationship with External Training
Providers (Manager survey and Senior manager survey, n=28)…………..……107

4.16 Summary Data: Regression of Perception of the Relative Training Program


Effectiveness on Selected Variables in the Relationship with External Training
Providers (Manager survey and Senior manager survey)………………..……..108

4.17 Standard Multiple Regression of Perception of the Relative Training Program


Effectiveness on Selected Variables in the Relationship with External Training
Providers (Manager survey and Senior manager survey, n=28)…..……………109

4.18 Summary Data: Regression of Increase in Operational Margin on Selected


Variables in the Relationship with External Training Providers (Manager survey
and Senior manager survey)……………………………………………...……..110

4.19 Standard Multiple Regression of Increase in Operational Margin on Selected Variables in the Relationship with External Training Providers (Manager survey and Senior manager survey, n=10)……………………………………...111

4.20 Independent Samples t-test on Training Needs Assessment and Training Program
Effectiveness (Manager survey and Senior manager survey)……………….….112

4.21 Correlation Matrix between the Quality of Training Needs Assessment and
Training Program Effectiveness (Manager survey and Senior manager
survey)…………………………………………………………………….…….114

4.22 Frequencies and Percentages on Intended Goal of Training, Level of Relationship


to Job, and Level of Customization (Manager survey, n=45)…………….……116

4.23 Distribution on Training Participants’ Portion among Total Employees and Level
of Customizations (Manager survey, n=45)……………………………………117

4.24 Summary Data: Regression of Perception of the Specific Training Program
Effectiveness on Selected Variables in Training Program Characteristics
(Manager survey and Senior manager survey)………………………………....121

4.25 Standard Multiple Regression of Perception of the Specific Training Program


Effectiveness on Selected Variables in Training Program Characteristics
(Manager survey and Senior manager survey, n=37)………………………......122

4.26 Summary Data: Regression of Perception of the Relative Training Program


Effectiveness on Selected Variables in Training Program Characteristics
(Manager survey and Senior manager survey)………………………………....123

4.27 Standard Multiple Regression of Perception of the Relative Training Program


Effectiveness on Selected Variables in Training Program Characteristics
(Manager survey and Senior manager survey, n=37)………………………......124

4.28 Summary Data: Regression of Increase in Operational Margin on Selected


Variables in Training Program Characteristics (Manager survey and Senior
manager survey)…………………………………………………………….......125

4.29 Standard Multiple Regression of Increase in Operational Margin on Selected


Variables in Training Program Characteristics (Manager survey and Senior
manager survey, n=10)………………………....................................................126

4.30 Correlation Matrix between Perceptions on Training Program Effectiveness and


Increase in Operational Margin (Senior manager survey, n=43)……….............128

LIST OF FIGURES

Figure Page

2.1 Conceptual Framework for the Study of Training Program Characteristics and
Training Effectiveness among Organizations Receiving Services from External
Training Providers…………………………………………………….…………62

4.1 Number of Survey Participant Companies in each County in Ohio (OITP database,
n=45)………………………………………………………………………..……95

5.1 Revised Conceptual Framework……………………………….……………….141

CHAPTER 1

INTRODUCTION

Sustaining competitiveness is one of the most critical challenges facing business organizations (Porter, 1996). Rapid advancements in technology, information, and trade have led to changing business conditions, and competition, segmented markets, and greatly diversified customers have also reshaped the business environment. These constant changes require organizations to focus on being competitive, and in particular on the competitiveness of their human resources.

Human resources are considered among the most significant resources of business organizations (Swanson & Holton, 2001). As societies become more knowledge-based and the proportion of knowledge workers in business organizations increases, human resources become more critical (Jamrog, 2004). Business organizations therefore emphasize maintaining and developing their human resources.

Human resource development (HRD) refers to a planned process for improving organizational performance through training and employee development, career development, and organizational development (Jacobs, 2001b). HRD focuses organizational efforts for individual development toward organizational performance improvement and, ultimately, the sustainability of organizational competitiveness.

Training as an HRD intervention has played a significant role in improving organizational performance (Jacobs & Washington, 2003). The purpose of training is to create job performance outcomes as well as to enhance employees' knowledge and skills (Lewis, 1996). Although business organizations can generate, develop, and maintain employee competitiveness through recruiting and placement as well as through training, training has been the major means of sustaining employee development and, ultimately, of improving organizational performance. Thus, companies have increased training expenditures to sustain their employees' competitiveness. In particular, fast-growing companies have dedicated a substantial amount of time to the professional development of their employees (Cronin, 1993).

Organizations increasingly receive training services through outsourcing and partnerships with other organizations (Osterman, 1995; Knoke, 1997; Stewart, 1999). In this fast-changing environment, it is very difficult for business organizations, particularly high-tech companies with a high percentage of knowledge workers, to provide internally all the training programs needed to meet their training needs (Knoke & Janowiec-Kurle, 1999). Business organizations may hire outside consultants to identify training needs, hire training professionals to develop a specific training program for their needs, or hire only instructors. They may also select outside training providers to design and implement the entire training program. Regardless of the type of outsourcing, outsourced training expenditures for U.S. companies sharply increased from 9.9 billion dollars in 1994 to 19.3 billion dollars in 2000 (ASTD, 2002).

One of the most important sources of training among external training providers is higher education institutions (Carnevale, 1998; Hagen, 2002; Johnston, 2001; Johnstone, 1994). Colleges and universities provide human resource development services for organizations through partnerships. As business organizations receive greater benefits from partnerships, partnerships between industry and universities have expanded (Campbell & Slaughter, 1999; Hagen, 2002; Normile, 1996). Government agencies also consider universities as external providers of training and development, reflecting the need for continuous professional development, flexibility, and continuous adaptability to change (Mavin & Bryans, 2000). However, it remains very challenging for higher education to help increase the effectiveness of the workforce (Hanna, 2001; Lumby, 1999).

Compared with other higher education institutions in the U.S., community colleges provide much of the training and education services to business and industry. Overall, university–industry partnerships have increased, in particular training programs offered by community colleges (Johnstone, 1994; Lynch, 1991). Because one of the major goals of community colleges is to train and educate the current and future workforce of their communities, community colleges have been intensively and actively involved in training and education (Demb, 2003).

As partnership training programs provided by external training providers have increased, the types of training programs have also changed to meet organizations' various training needs. Training subjects range from basic writing and math skills to high-tech skills such as computer software, as well as managerial training (Cappelli et al., 1997).

Statement of the Problem

Rapid changes in the business environment and competitive market conditions

have required business organizations to sustain their competitiveness through employee

development (Jacobs, 2003). Business organizations have emphasized the significance of

training and made great efforts to improve training quality in order to sustain

competitiveness and improve performance (Jacobs & Washington, 2003). However, due

to limited resources, business organizations have increasingly utilized external sources

for identifying training needs, and for developing and implementing training programs

(Knoke & Janowiec-Kurle, 1999).

Business organizations acquire training-related services from external training providers such as training agencies, consultants, and educational institutions (Knoke, 1997; Sole, 1999). Among outside training providers, community colleges have actively provided a range of training-related services. The characteristics of the partnership between external training providers and client organizations have become major factors that can influence the impact of training (Hardingham, 1996).

As partnership training has become more common, the nature of the relationships between client organizations and external training providers, especially in partnership training, varies (Hawley et al., 2005). Some partnership training programs with community colleges have formal contracts while others do not. Some participants complete the partnership training program with academic credentials such as certificates, licenses, or degrees, while others do not. External training providers' level of involvement and the history of the relationships may also vary. In addition, different types of training providers add to the diversity of the relationships between client organizations and external training providers.

As partnership training has become more widespread, training needs assessments also vary. Business organizations have different training needs and resources and their own training design processes; at the same time, external training providers have different resources and experience in training design and implementation. Thus, some partnership training design processes rely heavily on needs assessment while others do not. Partnership training is therefore widely implemented with varying quality of training needs assessment.

It is widely believed that partnership training is beneficial to business organizations (Ellis & Moon, 1998; Gold et al., 1998; Hall & Scott, 2001; Hardingham, 1996; Ryan & Heim, 1997; Mavin & Bryans, 2000; Roever, 2000). Business organizations often enter partnership training programs simply because they need a full range of training programs to meet the organization's needs. Previous research on partnership training has found that training programs serve various needs, including organizational development and employee development (Ryan & Heim, 1997; Roever, 2000; Aslanian, 1988; Johnstone, 1994).

It is also believed that the outcomes of training for organizations depend not only on the quality of the training needs assessment and the nature of the training program, such as the type of training and the extent of customization, but also on the nature of the relationships with external training providers. Although partnership training provided by external training providers has unique characteristics in the nature of the relationship between client organizations and external training providers, the training needs assessment, and the nature of the training program, there is little information about whether those unique relationships, needs assessments, and program characteristics relate to any outcomes of training. In other words, the literature provides little information about whether training effectiveness differs when the nature of the provider-client relationship, the training needs assessment, and the nature of the training program differ in partnership training.

Therefore, the purpose of this study is to examine the impact of training program characteristics on training effectiveness among organizations receiving training services from external training providers. Specifically, the study evaluates training effectiveness as a function of the nature of the relationships between client organizations and external training providers, the training needs assessment, and the nature of the training programs. In addition, it investigates the relationship between training effectiveness as perceived by client organizations and as measured by financial performance.

Research Questions

The research questions to be examined in this study are as follows:

1. Does training program effectiveness differ based on types of training providers?

Does the training provided by community colleges and/or four-year universities

result in a higher degree of training program effectiveness in comparison to other

external and/or internal training providers?

2. Does the degree of training program effectiveness differ based on the nature of the relationship between client organizations and external training providers?

3. Does the degree of training program effectiveness differ based on the quality of

provider’s training needs assessment?

4. How do training programs vary in terms of the extent of customization, type of training, relationship to the job, proportion of participants versus total employees, and expected outcome? How do these differences in the nature of the training program affect training program effectiveness?

5. Are perceived training program effectiveness and the client organization's financial performance (operational margin) related? Is perceived training effectiveness a good indicator of financial performance, or vice versa?

Definition of Terms

The major terms for this study have been operationally defined as follows:

Partnership. A form of cooperative work between and/or among organizations striving toward an immediate, common goal(s). Partnership and collaboration are used interchangeably (Jacobs, 1999; Starbuck, 2001).

University-industry partnership. A form of cooperative work between colleges or universities and organizations whose focus is ultimately to improve current practice, performance, and development (Rohdes, 2001; Rowley et al., 1998). There is a wide range of formats, including research projects, training and education courses, placement, college curriculum development partnerships, and consortium programs.

University-industry partnership training. A form of training provided by educational institutions such as colleges and universities for client organizations in both the private and public sectors. The ultimate goal of the training is to improve current practice, performance, and development.

Partnership training (outsourced training or outside training). A form of training provided by professional training organizations, including private companies, educational institutions such as colleges and universities, vocational schools, and publicly supported training centers, for organizations in both the private and public sectors (Allen, 2002; Aslanian, 1988).

Business organization. An organization that provides products and/or services to maintain its presence. These organizations are current or potential users of training programs. They can be private companies, public or government agencies, or not-for-profit organizations.

Client organization. A business organization that purchases and receives any

training services. Training services include training needs assessment, training program

design, instructional materials, training courses, training instructor, or the delivery of the

entire training program.

Employer-focused training. A training program that is planned by an employer to

increase organizational performance through a specific training program that is linked

with their organizational missions and/or outcomes (GAO, 2004).

Training providers. Entities that provide training services such as needs assessment, program design, instructional materials, courses, instructors, or the delivery of the entire training program. Training providers can be educational institutions such as community colleges or universities, vocational centers, adult learning centers, private training vendors, or public training agencies.

Training design process. A training provider’s process to develop training

programs. It includes work analysis, needs assessment, feedback, program design,

instructional strategy development, delivery methods and media selection, and evaluation.

Training needs assessment. The process used by an organization to identify the company's training needs.

Nature of the provider-client relationship. A unique composition of several components of the relationship between client organizations and external training providers. One example is how the relationship between a client organization and a training provider is formed through different types of contracts. Other examples are: 1) the level of involvement of external training providers in the training process, 2) the contract history between a client organization and an external training provider, 3) follow-up contact, and 4) the external training provider's knowledge about the client organization's business.

Nature of the training program. A unique composition of several components in the training program. The extent of customization, the training program's relationship to the job, the type of training, and the expected outcome are examples of these components.

Customization. Degree to which the training provider changes aspects of an

existing course to meet the client organization’s employee’s needs.

HRD manager. A person who is mainly responsible for HRD tasks such as work analysis, training and education programs, and career development for employees. For the purpose of this study, a training coordinator or training manager may be referred to as an HRD manager.

Training effectiveness. The degree to which the training reaches the intended objective(s) or immediately expected outcomes that were planned in advance.

Perceived training effectiveness. The perceived degree to which the training reaches the intended goal or expected outcome.

Senior manager. A person who is in charge of the overall operation of a business organization and who persistently pursues operational efficiency through their management role.

Training impact. The outcomes and consequences of training results on the client organizations. These organizational results include performance improvement, enhanced competitive readiness, quality management, and increased renewal capability (Goldberg & Ramos, 2003; Lupton et al., 1999; Robinson & Robinson, 1989; Warr & Bunce, 1995).

Financial performance. A result of organizational activities that can be described in monetary form.

Workforce development. Collective activities among organizations, educational institutions, and governmental agencies that provide employment and training services for individuals as well as organizations (Jacobs & Hawley, 2003).

Limitations

The limitations of this study are described as follows:

1. Any generalizations from the results of this study are limited to the population of Ohio Investment in Training Program recipient business organizations in Ohio, U.S., and their training providers.

2. The results of the study are limited by the instruments used.

3. The study is limited to training programs for currently employed workers, excluding internships and programs for the future or retiring workforce.

4. The study is limited to employer-focused training programs, not employee-directed learning and/or training programs.

5. The survey is self-report in nature.

Significance

The results of this study will add knowledge to the field in three theoretical and

practical areas: (1) university-industry partnership training, (2) workforce development

policy, and (3) human resource development.

The study will contribute to research on university-industry partnerships.

Although numerous studies have attempted to prove benefits from university-industry

partnership projects, few studies conducted empirical research that evaluated the impact

of university-industry partnership projects. Most of the studies were descriptive or case-study based (Keithley & Redman, 1997; Lynch et al., 1991; McMurtrie, 2001; Normile, 1996; Otala, 1994; Powers & Powers, 1988b; Roessner et al., 1998; Santoro & Betts,

2002). By examining the effectiveness of partnership training programs, the study will

present numerical evidence on the effectiveness of partnership training programs on the

client organizations.

This study will bring valid research to workforce development policy analysts as

well as policy makers. Many government agencies have tried to stimulate their local economies through workforce development policy. Partnership training programs are one area that policy can encourage in order to train and educate the current as well as the future workforce and to assist local economic development. Specifically, the results of the study will assist

workforce development policy makers to reinforce the workforce development policy

more effectively and efficiently by utilizing all the current and potential resources in the

local areas. Policy can be developed to create synergy among local economic needs, workforce capability, and higher education functions. Workforce development policy analysts will also gain a research-based analysis tool to evaluate their workforce development policy at not only the outcome level but also the process level.

Numerous studies have been conducted about training needs assessment and

customization and their relationship to the outcome of HRD training (Brown, 2002;

Goldstein, 1993; Kaufman et al., 1993; McClelland, 1994a, 1994b, 1994c, 1994d; Wright

& Geroy, 1992). However, this study will detail the difference between training programs

conducted through partnership projects. This study not only will add to the research of

training needs assessment and customization in HRD but also to the research on

outsourced training, including partnership training with higher education institutions.

The results of the study will improve current HRD practice. Organizations engage with numerous types of external training providers and receive various forms of

training services. The data gathered from this study will help HRD practitioners to make

decisions about not only who will be selected as an external training provider, but also

how to engage in the outsourcing process including training needs assessment and

forming the provider-client relationship. HRD practitioners can choose characteristics of

partnership training programs and can determine the level of involvement in the training

needs assessment process while considering expected results from their decisions.

Ultimately, they can gain assistance in developing the most appropriate training programs

to fit their HRD strategies for their organization.

In addition, the study will enhance the understanding of the evaluation of training

programs. Conventionally, HRD research has focused on assessing training effectiveness

by measuring training results at the individual level, including learning and behavior change, but little research has been conducted to measure impact at the organizational level. On the other hand, labor economists have focused on measuring the

aggregate of individual employee’s productivity to assess organizational performance.

However, there has been little research linkage between training effectiveness and

training impact on organizational performance. The results of this study will provide

information on how training effectiveness and organizational performance are related,

and whether one can be a good indicator of another. The information about this

relationship will be significant to training evaluation.

The findings of this study will reveal senior managers’ awareness about training

impact and how they perceive training links with their organizations’ mission. Thus, it

will provide business leaders with critical knowledge about their significant role in

training evaluation, and provide organizations with sufficient rationale to assist their

leaders to be equipped with understanding of training.

One of the most common functions of community colleges is to train and educate

current and future workforces. Many higher education scholars have studied the

economic function of community colleges and asserted that partnerships with business

industries are inevitable for their existence (Bowie, 1994; Campbell & Slaughter, 1999;

Osterman, 1995). Because the results of this study will describe how community college training services impact their local businesses, higher education scholars can make a strong

case to support their studies about benefits of university-industry partnership.

In addition, workforce development practitioners in higher education will have

useful knowledge from this study. The findings of the study will assist workforce

development practitioners in higher education to increase effectiveness of institutions’

functions of training and education and to reach their organizational missions.

They will also gain knowledge on improving the effectiveness of their current practice in engaging with business and industry and providing training services.

Overall, the results of the study will provide critical information about the

effectiveness of partnership training to a wide range of professionals: HRD scholars and

HRD practitioners, business leaders, educational scholars and leaders, as well as policy

makers for business industry and labor issues.

CHAPTER 2

REVIEW OF THE LITERATURE

This chapter reviews the literature in four areas: evaluation of training, partnership training, training needs assessment, and the nature of the training

program. In addition, this chapter proposes a conceptual framework based on the

literature review.

Evaluation of Training

This section reviews the literature on training evaluation in four parts. The first part reviews HRD literature on training evaluation, and the second part discusses economic literature on training evaluation. The third part discusses evaluation of state-funded, employer-based training. The last part reviews business literature on the measurement of organizational performance.

Training Evaluation in Human Resource Development

The evaluation of training has been studied by labor economists and by human

resource development (HRD) scholars. From the HRD perspective, evaluation of training

examines training impact on organizational goals and strategies and on individual

performance requirements. Because HRD is based on systems theory, the evaluation of

training is considered as a tool for continuous improvement at the individual, process,

and organizational levels (Brinkerhoff & Gill, 1994). HRD literature views the evaluation of training not as a single event but as a process within organizational systems. Therefore, HRD scholars have developed evaluation models that represent systemic approaches for linking evaluation with organizational goals.

One of the best known and most widely used frameworks for classifying

evaluation is the Kirkpatrick model. Kirkpatrick originally proposed the model as steps in 1959 and later described it as levels (Kirkpatrick, 1996). Kirkpatrick's four levels

are:

- Level 1. Reaction: what the participants thought of the program, normally

measured by the use of reaction questionnaires.

- Level 2. Learning: the changes in knowledge, skills, or attitude with respect to

the training objectives, normally assessed by use of performance tests.

- Level 3. Behavior: changes in job behavior resulting from the program, to

identify whether the learning is being applied. Assessment methods include

observation and productivity data.

- Level 4. Results: the bottom-line contribution of the training program.

Methods for measuring results include measuring costs, quality and return on

investment (ROI).

The strengths of the Kirkpatrick model lie in its simplicity, usefulness, and

pragmatic way of helping practitioners think about training programs. It is easily

comprehended and makes sense to organizations and has become the most commonly

adopted model or framework on training evaluation (Alliger & Janak, 1989). Although

there have been criticisms of the Kirkpatrick model, numerous studies in evaluation

training have been conducted based on this model.

Since the Kirkpatrick model was introduced, several modifications of this model

have been developed by various researchers to attempt to design a better evaluation

model from organizational perspective. For example, Phillips (1996) added a fifth level—

ROI level—to separate the assessment of the monetary benefits of the training compared

to its costs.

Another expansion of Kirkpatrick's original model is the organizational elements model by Kaufman and Keller (1994). They argued that the Kirkpatrick model

was useful for only evaluating training, and that the model needed to be modified since

organizations wanted to evaluate other types of development events. They expanded the

Kirkpatrick model by adding societal contribution level as an evaluation criterion.

Although HRD research has put great effort on developing a model to aid

conceptual thinking about the evaluation of training, especially from an organizational

development perspective, few empirical studies were found, owing to the models' inability to distinguish the actual impact of training on benefits from that of other variables (Gray & Herr, 1998).

Ahlstrand, Bassi, and McMurrer (2003) analyzed available information in the

1997 and 1998 American Society for Training and Development (ASTD) databases based

on the Kirkpatrick model. In order to identify training impact for lower-wage workers

from employer’s providing training programs, they distinguished the courses designed for

lower-wage workers by participant characteristics. They divided the participants into two

levels: participants who had fewer than 12 years of formal education, or participants in the course who earned ten dollars or less per hour. In the total database of 831

courses, less than ten percent of the training courses were oriented to lower-wage workers.

In the initial evaluation immediately after completion of the training course,

participant’s reaction—level one in the Kirkpatrick model—was measured whether

participants’ knowledge or skills increased as a result of the course or whether their

newly earned knowledge or skills are applicable to their current job. It was found that the

reaction level of the participants in the lower-wage worker oriented courses was less

favorable than those in other courses. In the follow-up evaluation, usually three to six

months after completion of the course, participants’ performance change was measured

by requesting the supervisor’s assessment. It was found that performance improvement of

those workers who participated in the lower-wage worker oriented courses was much

higher than those in other courses, though it was not statistically significant (Ahlstrand et

al., 2003).

Training Evaluation in Labor Economics

Economists have also studied training evaluation from a slightly different point of

view. Although both HRD scholars and labor economists applied the human capital

theory to their research, the economists’ approach is more empirical and focuses more on

return on investment of human capital. The human capital approach assumes that workers

choose post-school human capital investments (training or education) that maximize their lifetime earnings, so that workers pay the full cost of general training and earn the full

return (Barron et al., 1999; Borjas, 2000).

One of the key distinguishing features in the labor economics’ training evaluation

is the concept of present value. The training cost is paid in current dollars while the

training ROI is expected to be gained in the future. Hence, the expected return should be

calculated at the present value, which is discounted by the long-term interest rate when

considering opportunity cost (Becker, 1964; Borjas, 2000).
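To make the discounting step concrete, here is a minimal sketch with hypothetical figures; the function name and the numbers are illustrative and are not drawn from the studies cited:

```python
def present_value(future_return: float, rate: float, years: int) -> float:
    """Discount a return expected in the future to today's dollars."""
    return future_return / (1 + rate) ** years

# Hypothetical figures: a $10,000 training return expected in 5 years,
# discounted at a 5% long-term interest rate.
pv = present_value(10_000, 0.05, 5)  # about $7,835
# The training investment is attractive only if pv exceeds its current cost.
```

The same comparison framed in current dollars would overstate the return, which is the opportunity-cost point made above.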

Another key feature is that economists have conducted a number of empirical and

theoretical studies about on-the-job training. On-the-job training in labor economics is

less clearly defined than in the HRD area. It is generally understood as all training programs that are related to trainees' current jobs and that are provided by employers for their employees, including in-class training and on-site training (Barron et al., 1999; Borjas, 2000).

Economists used experimental data to find out how this training works. For

example, many research questions are related to the effect of training on earnings or

employment rate or duration, which require empirical data to provide answers (Acemoglu

& Pischke, 1999). Even though there are concerns about using only simple estimators to assess training returns, and even though researchers acknowledge their inability to assess returns that cannot feasibly be measured, this approach is widely used (Ham, 1994).

Barron, Berger, and Black (1999) studied the relationships among on-the-job

training, starting wages, wage growth and productivity growth. Two survey data sets

were used: (1) an Employment Opportunity Pilot Program surveyed 5,700 employers and

gained a working sample of 756 workers in 1980 and 1982 and (2) a survey of 3,600

firms in the Small Business Administration in 1992 gained a working sample of 1,323.

Training was measured by time spent in formal training, time spent in informal training

with supervisors or coworkers, and time spent watching others perform. The study used

several dummy variables, such as years of education, industry, occupation, and unionization status, and controlled for relevant work experience and worker ability.
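As an illustration of how dummy variables enter such a wage regression, here is a small ordinary least squares sketch on simulated data; the variable names, coefficients, and data are entirely hypothetical and do not reproduce the Barron et al. specification:

```python
import numpy as np

# Simulate hypothetical workers: years of education and a union dummy.
rng = np.random.default_rng(0)
n = 200
education_years = rng.integers(10, 18, n).astype(float)
unionized = rng.integers(0, 2, n).astype(float)  # dummy: 1 if unionized
log_wage = 2.0 + 0.08 * education_years + 0.15 * unionized + rng.normal(0, 0.1, n)

# OLS with an intercept column; beta roughly recovers (2.0, 0.08, 0.15).
X = np.column_stack([np.ones(n), education_years, unionized])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
```

With more categories (industry, occupation), each category would contribute its own 0/1 column in the same way.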

However, the study's findings differed from the hypotheses based on human capital theory. The study found that workers who required less training were more likely to earn a higher starting wage, but those who required more training than the typical worker did not receive a lower starting wage. This suggests that companies, not workers, bear the major portion of training costs. According to human capital theory, firms do not pay for general training, yet the survey showed that more than 60 percent of training was general training, and only eight percent was purely firm-specific. In general training, the new knowledge and skills learned are useful outside the company, whereas in specific training the new knowledge and skills learned are too firm-specific to be used outside the company (Borjas, 2000). The

study also found that wage growth was weakly correlated with training but productivity

growth was highly correlated with training. Thus, the impact of training on productivity

growth is much larger than the impact of training on wage growth (Barron et al., 1999).

Evaluation of State-Funded, Employer-Based Training

Every state government has various forms of training programs. For the last decade, more state governments have actively engaged in state-funded, employer-based training aimed at improving their states' economic development. State governments have

attempted to develop a state’s economy through state-funded, employer-based training

programs, which provide a competitive workforce for supporting: (1) recruitment of

business and industry from other regions, (2) retention, revitalization, or expansion of

existing business and industry, and (3) development of new business and industry (GAO,

2004; Grubb & Stern, 1989; Hodson et al., 1992; Moore et al., 2003; Regional

Technology Strategies, 1999). Not only in the United States but also in many developing countries in Asia, South America, and Africa, local and federal governments have developed community colleges as workforce training agencies, in cooperation with business and industry, international associations, and foreign agencies, modeling the U.S. community college system (McMurtrie, 2001).

Because of the nature of state-funded programs, most state-funded training

programs have been evaluated by various local and federal governments and independent

researchers. These evaluations have been conducted from legislative and policy perspectives as well as economic and/or mixed perspectives. Many governmental reports warn that most state-funded, employer-focused training programs need more in-depth evaluation in terms of their impact at the firm level, the employer's

reason for selecting training providers, and linkage with other state and federal higher

education programs (GAO, 2004; Regional Technology Strategies, 1999). Although most

state governments implemented state-funded training programs, the quality of evaluation

of those programs varies from state to state.

The National Governors’ Association developed a report based on 47 state

governments' responses about state-funded, employer-focused training programs. One of the study's findings was that there were weak linkages between state-funded,

employer-focused training and education at state-supported higher educational

institutions, though states' interest in connecting the two entities was increasing (Regional

Technology Strategies, 1999).

In California, a state with some of the best practices, the major training providers are community colleges. While few states require independent, outside evaluations, California legislation requires a regular training outcome evaluation, which is designed to measure training participants' wages for the 12 months before and the 12 months after training (Regional Technology Strategies, 1999).

The United States Government Accountability Office (GAO, 2004) conducted a survey of 23 states regarding their state-funded, employer-focused training programs. This report reached conclusions similar to those of the previous 1999 survey. The report stated that all 23 states assessed their training programs in 2002, but none had implemented sufficiently extensive evaluation methods to see the impact of training in terms of employees' wages or companies' earnings. In 41 percent of the states, internal state government staff evaluated the programs, while only four percent of the states hired external evaluators. Fifty-four percent of states implemented a combination of internal and external evaluation.

Moore, Blake, Phillips, and McConaughy (2003) conducted an extensive

evaluation study on California’s Employment Training Panel programs. They selected a

sample purposefully in order to capture a wide range of business types and sizes, and

used both qualitative and quantitative evaluation methods. They measured the quality of

training through assessing the quality of instructors, training materials, and customization.

They applied Kirkpatrick’s four levels of evaluation to assess learning from training.

Management reinforcement and institutionalization were also measured. The study

developed a very interesting framework for measuring the value of training impact. The

framework posits that the value of the gains realized from training is a function of the potential gains, the quality of training, and management reinforcement of training.

As a quantitative approach, they measured the number of employees and their wages. Growth in the number of employees was compared to the average growth of the same industry, and growth in total wages per employee was compared to average wage growth in the same industry. One rationale for these measures is that they are the only uniform data available across companies (Moore et al., 2003).
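A minimal sketch of this industry-relative comparison, using hypothetical figures rather than the study's data:

```python
def growth_rate(before: float, after: float) -> float:
    """Fractional growth between two periods."""
    return (after - before) / before

# Hypothetical company: headcount grew from 100 to 112 employees (12%).
company_growth = growth_rate(100, 112)
industry_average_growth = 0.05  # hypothetical industry benchmark
# Positive excess growth suggests the company outgrew its industry.
excess_growth = company_growth - industry_average_growth  # ≈ 0.07
```

The same calculation applies to total wages per employee against the industry's average wage growth.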

Measurement of Organizational Performance

As identified in the previous section, most evaluations of training impact tend to measure learning before and after the training intervention. Self-reporting methodology has

predominated over behavioral observations (Phillips, 1990), and empirical studies of

training impacts on organizational performance have been rare. Instead, human resource development researchers have measured training impact at the individual level, in terms of learning and behavior change.

Measures by Evaluation Level

As reviewed earlier in the evaluation of training section, Kirkpatrick’s evaluation

model with four levels has been one of the most popular models in training evaluation.

Hence, many researchers have developed measures to assess training outcomes based on

Kirkpatrick's evaluation model (Alliger et al., 1997; Warr & Bunce, 1995). A recent ASTD report described companies' current training evaluation practices. It

reported that 78 percent of companies use reaction measures, 32 percent use learning

measures, and less than 10 percent of companies assess measures of behavioral change or

organizational results (Van Buren & Erskine, 2002).

Trainees' reactions, level one in Kirkpatrick's model, have been measured by trainees' enjoyment or satisfaction. Alliger et al. (1997) asserted that enjoyment and

perceived usefulness should be differentiated and that both should be used to measure

trainees’ reaction. Warr and Allan (1999) further developed measurement in this reaction

level and found that differentiated measures are more closely related to learning

outcomes than traditional measures. They used trainee’s enjoyment, perceived usefulness,

and perceived difficulties to measure trainees’ reactions. They additionally measured

trainees' pre-training motivation and confidence levels. One other recent study added two more measures, post-transfer utility reactions and transfer climate reactions, to the above measures in assessing the reaction level (Sekowski, 2002).

To assess the second level of Kirkpatrick's model, the amount of knowledge acquired is generally used. The amount of learning from the training is usually measured immediately after the program. More recent studies asserted that trainees' perceived value of equipment and instructors should be measured as well as acquired knowledge (Kraiger et al., 1993).

In addition to measuring the amount of learning, training researchers have

investigated how an individual's self-efficacy is critically associated with higher learning

efficiency. Learning outcomes are separately measured based on types of learning such as

cognitive learning, skill-based learning, and affective (or attitudinal) learning outcomes

(Sekowski, 2002).

Behavior change, or transfer, the third level in Kirkpatrick's model, is usually measured based on supervisors' observations or trainees' self-reports before and after training. Levels two and three of Kirkpatrick's model have been the levels most frequently used by practitioners at numerous companies to measure training outcomes (Goldberg & Ramos, 2003).

Measures at all three levels are believed to reflect training impact consistently.

Warr and Allan (1999) studied 23 two-day training courses attended by motor-vehicle

technicians. All participants completed tests and questionnaires pre-training and post-training. The study found that measures at the first level have statistically significant relationships with measures at the second and third levels. For example, those who enjoyed the course and perceived its utility showed higher learning efficiency and greater behavior change.

Leach and Liu (2003) also found that there are positive relationships among

reactions, knowledge acquisition, and behavior change. They asserted that those who had constructive reactions to training have greater potential to learn the materials, and those with a higher level of knowledge retention were more likely to apply it in their work. Their study found that level-two measures are most closely related to level-four results. Thus, if companies want to measure organizational-level outcomes, assessing knowledge transfer is the best indicator among Kirkpatrick's levels of evaluation.

Results, or business impacts, have been considered the most difficult to measure, since no single event alone affects them, though level-four outcomes are considered the most tangible (Goldberg & Ramos, 2003; Lupton et al., 1999; Warr & Bunce, 1995). However,

researchers have made efforts to identify relatively reliable measurements such as sales,

productivity, cost, quality, and turnover rate. The following section will describe what

measures have been identified to assess the fourth level of organizational outcomes.

Financial Measures

Business and economic scholars have frequently used financial measures to assess

training impact. However, most studies on returns from training have focused on

individual returns to training (Jacobs & Washington, 2003). Thus, an individual’s wage

has been one of the most popular measures in assessing training impact, in particular,

from the economics perspective (Lynch, 1992).

While economists use individual earnings as a measure of training outcome at the

individual level, other researchers use aggregation of individual earnings as a measure of

training outcome at the organizational level. Moore, Blake, Phillips, and McConaughy

(2003) applied individual wages in measuring organizational performance in their

evaluation study of California state-funded, employer-focused training programs. They

measured change of total wages per employee for all the employees which included

training participants as well as non-participants. Their assumption was that if training increased organizational performance, total wages of all employees should increase overall.

Numerous studies on measuring performance have used many other financial

measures such as growth, earnings, return on investment, or unit costs, though few

studies have applied the financial measures on assessing organizational performance as

training outcome (Eccles, 1991).

Growth is one of the conventional measures to indicate organizational

performance. Growth represents growth in revenue and/or growth in production.

Although growth itself does not promise increased income, it is set as a target objective by many organizations. Because growth is closely related to economies of

scale in many industries, growth in revenue and/or in production directly relates to lower

unit costs and higher return. In addition, if companies’ short-term goal is to increase

revenue, growth becomes the major measure for organizational performance (Kalleberg

& Van Buren, 1996).

Earnings are another major measure of organizational performance. Either operating earnings or net income is measured, depending on what is to be assessed. If operational efficiency is to be measured, operating earnings are appropriate; if not only operational efficiency but also the efficiency of financial decisions is to be measured, net income is more appropriate. Operating earnings equal total revenue minus operating expenses, depreciation, and amortization. Operating expenses include production costs and overhead expenses such as energy, maintenance, human resources, marketing, and any other supporting costs. Net income equals operating earnings minus financial expenses (losses or gains). Financial expenses include capital investment, lease rental, insurance, and interest (Damodaran, 1999).
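These two definitions can be sketched as simple arithmetic, with hypothetical dollar amounts:

```python
def operating_earnings(revenue: float, operating_expenses: float,
                       depreciation: float, amortization: float) -> float:
    """Total revenue minus operating expenses, depreciation, and amortization."""
    return revenue - operating_expenses - depreciation - amortization

def net_income(op_earnings: float, financial_expenses: float) -> float:
    """Operating earnings minus financial expenses."""
    return op_earnings - financial_expenses

# Hypothetical firm: $1,000,000 revenue, $700,000 operating expenses,
# $50,000 depreciation, $20,000 amortization, $30,000 financial expenses.
op = operating_earnings(1_000_000, 700_000, 50_000, 20_000)  # 230_000
ni = net_income(op, 30_000)  # 200_000
```

The gap between the two measures isolates the effect of financial decisions from operational efficiency.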

Non-financial Measures

Recent studies on company practices that measure performance have started to

use non-financial measures such as organizational commitment, customer satisfaction,

employee satisfaction, market share, product quality, retention, and turnover rate. These

non-financial measures are quantitative or at least quantifiable.

Since there are relatively few reliable measures available to assess performance of

professional or office workers, researchers have tried to identify indicators or factors

which have a positive relationship with their performance. They include job satisfaction,

turnover rate, absenteeism, and organizational commitment (Leach & Liu, 2003; Lincoln

& Kalleberg, 1990; Mathieu & Zajac, 1990).

Eaton (2003) investigated the relationship between flexibility policies and organizational performance using survey data on professional and technical workers in

seven biopharmaceutical companies. Although her work does not assess the results of training impact, it shows a measure of organizational performance using organizational

commitment. The flexibility was measured by three levels: formal, informal, and usable

policies. The organizational performance was measured by self-reported productivity and

organizational commitment. The results of the study found that usable flexibility policy

has a statistically significant association with organizational commitment, specially for

small companies. She used an organizational scale drawn from Lincoln and Kalleberg

(1990).

Leach and Liu (2003) assessed organization-level training outcomes of salesperson training by asking trainees to rate the perceived efficacy of the training program in building organizational commitment, achieving sales effectiveness, and improving customer relations. Because salespeople work on a commission basis, they are believed to be sensitive to how their time is spent; thus, the authors considered their self-reports relatively accurate.

Pauley (2001) described a partnership training program for client organizations among community colleges, two state government departments, a public training center, vendors, and a local chamber of commerce. In this study, Pauley assessed organization-level training outcomes by asking employers to report employee retention and changes in productivity relative to training hours and costs.

Studies in this area have also attempted to validate these non-financial measures as predictors of organizational performance, in particular long-term financial performance (Anderson et al., 1994; Banker et al., 2000; Ittner & Larcker, 1998). However, the results of those studies are not consistent. For example, one survey study analyzed customer and business unit data for two service firms and found that customer satisfaction measures are positively related to future financial returns; however, the cross-sectional data analysis did not show any consistent association between customer satisfaction and market returns (Ittner & Larcker, 1998). Another study analyzed 77 Swedish firms from various industries and showed that customer satisfaction is positively related to concurrent returns on investment, but found negative or only weakly positive relations in service industries (Anderson et al., 1994).

However, a more recent study asserted that those inconsistencies are due to an inappropriate time lag between when customer satisfaction and financial returns were measured. Banker, Potter, and Srinivasan (2000) analyzed data from 18 managed properties of a hotel chain to determine the relationship between customer satisfaction and long-term financial performance. The study found a significant association between customer satisfaction and financial performance measured by business unit revenue and operating profit. They reported that a single measure of customer satisfaction is just as effective as a complex combined measure. They asserted that the time lag should be shorter when the analysis is applied in the service sector and longer when it is applied in manufacturing industries.

One very extensive training evaluation study used change in the total number of employees as an organizational performance measure. Given that downsizing and efficient operation are often considered representative organizational competencies, an increased number of employees might not seem a good measure of company performance. However, the authors asserted that an increase in employees relative to the average increase in the same industry could represent a company's viable growth and improved performance. They believed that greater growth in employee size could be a proxy for company success, and they used the percentage change in the total number of employees (Moore et al., 2003).
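The proxy amounts to a simple benchmark calculation. The sketch below uses hypothetical figures and a function name of my own; the source confirms only that the percentage change in total employees was used, with the industry average as the point of comparison:

```python
def relative_employment_growth(begin_headcount, end_headcount, industry_avg_growth_pct):
    """Percentage change in total employees, benchmarked against the
    average growth in the same industry (interpretation per Moore et al., 2003)."""
    company_growth_pct = (end_headcount - begin_headcount) / begin_headcount * 100.0
    return company_growth_pct - industry_avg_growth_pct

# Hypothetical firm: 200 -> 230 employees (15% growth) while the industry
# averages 5% growth, i.e., 10 points above the industry average.
print(relative_employment_growth(200, 230, 5.0))  # 10.0
```

A positive result indicates growth above the industry norm, which is what the authors treated as evidence of company success rather than raw headcount alone.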

University-Industry Partnership Training

This section reviews the literature on university-industry partnerships and partnership training in three parts. The first part reviews the background, definition, and forms of university-industry partnership. The second part reviews empirical research on the impact of university-industry partnerships at both the organizational and employee levels. The third part reviews the rationale for partnership training and the forms of relationship between client organizations and universities.

University-Industry Partnerships

University-industry partnerships have been believed to benefit both industry and universities in various areas. That is why such partnerships have been implemented in a wide range of communities, and why industry's participation in partnership programs has increased. Before looking at the current practice of university-industry partnership, a brief history of partnerships and their motivation is discussed.

During the 1920s and 1930s, private foundations were the dominant external source of funds for university research. At the time, those funds went into basic research with the goal of benefiting humankind. After World War II, university research partnerships arose out of the need for intensive research requested by the federal government. The Department of Defense, the Atomic Energy Commission, and NASA were major supporters of research at universities. By that time, universities had also come to realize the value of business interests, though industrial support grew gradually (Bowie, 1994).

In the late 1970s and early 1980s, financial support to universities from the federal and state governments, usually for research and development (R&D), started to decrease, and universities began to look for other funding sources in business, a shift that the governments encouraged. University-industry partnerships have since become increasingly common and intensively implemented (Bowie, 1994; Campbell & Slaughter, 1999).

Besides financial challenges, several other pressures encouraged the promotion of partnerships. Significant environmental trends such as the political-economic order of countries, international competition over high technology, and the rise of the ‘knowledge society’ stimulated university-industry relationships (Starbuck, 2001). In the post-industrial society, knowledge became increasingly important for achieving competitive advantage in industry. This shift motivated business and industry to consider universities as a source of knowledge, and industry's sponsorship of universities has thus taken the form of partnerships (Santoro & Betts, 2002).

Purpose and Definition of University-Industry Partnership

There are many different motives for participating in university-industry partnerships. A survey sponsored by the National Science Foundation showed that the reasons companies interacted with universities are:

(1) to access manpower, (2) to obtain technology and scientific knowledge, (3) to provide general technical excellence, (4) to access university facilities, (5) to obtain prestige or enhance the company's image, and (6) to solve a particular problem or get specific information unavailable elsewhere (Powers & Powers, 1988a).

The reasons from the universities' side are:

(1) to access a new source of money and diversify the university's funding base, (2) to give students exposure to real-world research problems, (3) to work on intellectually challenging research programs, (4) to access a company's research facilities and research data, and (5) to provide better training for the increasing number of graduates going to industry (Powers & Powers, 1988a).

However, the ultimate goal beyond these motives is to improve practice. Thus, a university-industry partnership can be defined as “working together for improving practice” (Jacobs, 1999; Starbuck, 2001). When a university and a company conduct partnership projects, each has its own immediate goals, as discussed above. Hence, a university-industry partnership might have dual goals: industry wants to solve immediate organizational problems or issues, while the university's faculty want to extend the body of knowledge in the research area.

Two major questions are why industry wants to solve its organizational problems and why researchers want to generate new knowledge. Industry wants to improve practice by solving business problems; researchers want to improve practice by building new knowledge to apply to practice. Jacobs (1999) defined HRD partnership research as “the process of improving HRD practice through research” (p. 874). The ultimate goal of university-industry partnership is to improve practice, whether through research, training, or services. Starbuck (2001) likewise defined the university-industry partnership as “working together toward a common goal” (p. 2), and the common goal is to improve practice.

Forms of University-Industry Partnership

University-industry partnerships are delivered in a number of forms and vary in

intensity of collaboration and scale of intervention. The most conventional and common

partnership is ‘the university-based, team-based research project,’ which allows

businesses to solve their problems or issues (Roessner et al., 1998).

More recently, training and education have become another major purpose of university-industry partnership projects. Under workforce development partnerships, training takes many forms, such as management education, certification programs, tech prep, internships or apprenticeships, work-based learning, placement, college curriculum development partnerships, consortium programs, teaching company schemes, the placing of full-time management students on in-company projects to act as consultants and analysts, sponsoring and facilitating applied research programs in management development, and developing a ‘learning organization’ through partnership (Bailey, 1995; Ellis & Moon, 1998; Keithley & Redman, 1997; Pearce, 1999; Roever, 2000; Thacker, 2002; Wells, 1999). Recently, the traditional ‘customer-supplier model’ in management development has been replaced by a ‘learning partnership model’ involving a mixture of learning, consultancy, and research (Keithley & Redman, 1997).

Empirical Work in Partnership Training

University-industry partnerships have contributed to organizational development and employee development in numerous ways. As a result, it is widely believed that such partnerships benefit industry. Given that the primary goal of private companies is ‘profit maximization,’ the increasing number of university-industry partnership projects can itself be taken as a positive indicator that these partnerships benefit industry.

In addition, a number of workforce development and business scholars have examined university-industry partnership outcomes to identify the benefits of these partnerships to industry. How university-industry partnerships have contributed at the organizational and employee development levels is discussed in a review of evidence from the following case studies.

Impact on Organizational Development

Learning Organization through Knowledge/Technology Transfer. One of the biggest advantages of university-industry partnerships is that organizations can engage in continuous learning and gain new technology and knowledge for continuous improvement. Several studies documented the role of university-industry partnerships in knowledge transfer. For example, technology research outcomes transferred from universities to industry contributed 38 billion dollars to the economy, creating over 300,000 jobs and forming hundreds of new companies (Hall & Scott, 2001). Another study found that technology and knowledge are transferred from universities to industry not only through research partnership projects but also through training and education partnership projects. University-industry partnerships also assisted companies in recruiting an appropriately trained workforce that can increase technology transfer (Ryan & Heim, 1997).

Organizational Change. University-industry partnership training programs enable participants to view their organizations from a more objective point of view and to initiate organizational change as change agents. For example, all the Company Associate Partnership Scheme (CAPS) projects were established as change projects, and participants perceived themselves as change agents. CAPS was established in 1993 as a partnership among British small and medium-sized enterprises (SMEs), recent graduates, and academic support from a local college and Leeds Metropolitan University. During the program, even those who resisted change came to understand the need for and advantages of change, and those who supported change came to understand the resisters' fears. Thus, graduates could help SMEs change their organizations and organizational culture (Gold et al., 1998).

Public sector scholars also viewed universities as uniquely positioned to play a role that encourages individuals and their organizations to critically challenge their ways of working and thinking. Thus, in the 1990s, when public sector organizations faced demands to become more effective, Mavin and Bryans (2000) described mutually beneficial outcomes from university-industry partnerships, which could result in the development of continuous learning, culture change, organizational development initiatives, and appropriate restructuring.

Qualified Workforce and Professional Development. Through university-industry partnerships, companies can recruit an appropriately trained workforce and stimulate current employees' professional development. Companies often recruit a qualified workforce and develop participating employees' skills and knowledge not only through training partnership programs but even through research partnership projects (Ellis & Moon, 1998; Roever, 2000; Ryan & Heim, 1997).

Private sector companies were not the only entities to value external training providers. Government agencies also considered universities as external providers of training and development, reflecting the need for continuous professional development, flexibility, and continuous adaptability to change (Mavin & Bryans, 2000). Universities themselves, such as Saint Mary's University, also developed international management programs (sponsored by the Canadian Chamber of Commerce in Hong Kong and assisted by five-star hotels in Hong Kong) and educated international managers with global perspectives (Chan, 1994).

Impact on Employee Development

Some HRD scholars have asserted that there is a relationship between individual development and organizational performance (Jacobs & Washington, 2003). Even though few studies have proved a clear linkage between employee development and organizational development, employee development has been described as having a positive relationship with organizational development.

Compared to traditional education, participants in university-industry partnership training programs experience several advantages. The partnership training programs give participants

“more convenient time and places to study, less worry about finances, clearer understanding of the connections between theory and applications, greater contact with other students who have similar career objectives, more chances to practice classroom skills on the job, less concern about jobs after graduation, and greater access to instructors who are current in their fields” (Aslanian, 1988).

Although university-industry training partnership participants are not employed during the training period, their placement rates are very high. For example, in the Ben Franklin Programs, technology education programs run through a partnership between the Behrend College and a consortium of several small companies with common interests in plastics and materials technology, placement rates were nearly 100 percent (Ryan & Heim, 1997).

In addition, individual participants gain more applicable knowledge and training through university-industry training partnerships. The HRD partnership program between East Tennessee State University and Sprint adopted an open systems approach to education based on problem-based learning, a form of education that combines discipline knowledge with a focus on solving problems (Yasin et al., 2000). Education in applicable knowledge and appropriate training ultimately increases individuals' performance, job satisfaction, and employment status at their workplace (Johnstone, 1994).

Although few empirical studies have examined the impact of university-industry partnerships on workforce development, especially from a labor economics point of view, case studies have examined whether university-industry partnerships help improve workforce development outcomes for individuals or for organizations.

New York Telephone Company and Empire State College. In 1991, the State University of New York's Empire State College and the New York Telephone Company established a partnership program, the Corporate/College Program, which enables company employees to earn associate or baccalaureate degrees while working full-time. Empire State College is a ‘nontraditional’ institution for adults, offering a full range of short-course training and tailored programs. The partnership arose because New York Telephone needed to develop functional workplace competencies: a competitive market and customers' needs required excellent service using state-of-the-art technology. With the assistance of the New York City public schools, Empire State College and New York Telephone recruited one hundred senior high school graduates who were willing to go to college but could not afford it; 92 percent of them came from minority groups. In addition, 30 current employees and 20 people from outside the company were added to the program.

New York Telephone's financial commitment was matched by Empire State College's program flexibility and capacity for the partnership program. Academic program design began nine months before the students enrolled. Besides a flexible program schedule to accommodate working conditions, the Corporate/College Program implemented unique supports for students' needs, such as a ‘collegiate seminar’ for young students and an ‘intense personal mentoring’ program. While the average attrition rate in New York City community colleges was 50 percent at the end of the first semester and 80 percent by the end of one year, 90 percent of the Corporate/College Program students completed the first semester and over 70 percent started the second year.

It was too early to measure the long-term financial return from this partnership project to New York Telephone, but a positive indicator of productivity appeared. The younger students, those hired right after high school graduation who participated in the partnership program, were among the best salespeople in every office and often outperformed their senior colleagues. In the evaluation, students reported increased self-esteem and competence. In addition, other employees' interest in further education also increased because of this program (Johnstone, 1994). This case study showed the impact of partnership programs on workforce development primarily at the individual level, but those individuals' high performance is ultimately linked to organizational performance.

DuPont Corporation and Penn State University. DuPont and Penn State have a longstanding relationship based on a mutual interest in total quality management (TQM). DuPont adopted TQM as the cornerstone of restructuring the company following downsizing and reengineering. The partnership developed a TQM model for DuPont and facilitated its adoption there. From this partnership project, DuPont and Penn State established an additional communication channel that enabled direct linkage between university and corporate officials in various strategic areas such as human resource development and continuing education, technology transfer, and the advancement of the TQM relationship.

While conducting reengineering projects, DuPont looked to acquire relevant technology from sources outside the company. Since the director of technology acquisition at DuPont and the director of the Industrial Research Office at Penn State had established a personal and professional relationship through the previous partnership, DuPont and Penn State were able to successfully launch research and technology development projects in advanced materials, housing and construction materials research, business, and biotechnology.

Penn State also developed a relationship with Forum, Inc., DuPont's training service provider, and jointly developed comprehensive training programs in response to DuPont's training needs. DuPont team members were also involved in an innovative manufacturing engineering program that would revise the engineering curriculum to emphasize the interdependency of design in a business environment (Ryan & Heim, 1997). This case study showed that an initially developed partnering relationship affected other business areas such as human resource development, research and development, and curriculum design, yielding mutual benefits.

Cummins Engine Company and Teesside Business School in the UK. Cummins, one of the world's major diesel-engine manufacturers, faced severe global competition from Japanese companies, so it launched a corporate initiative, the Cummins production systems (CPS). CPS led Cummins to re-evaluate its management activities, and the company found that new management skills would be required to support CPS at all levels.

The partnership program designed several part-time, integrated programs leading to a Management Certificate, a Management Diploma, or a Master of Business Administration (MBA). Most courses took a work-based learning approach and comprised monthly residential and open-learning sessions over 24 months. Most managers were encouraged to develop their own personal development plans to implement CPS, and courses were delivered by a combination of Cummins staff, outside consultants, and Teesside Business School faculty members. Programs were validated by the university-Cummins panel, and several quality control mechanisms were in place.

Following the success of the initial programs for supervisory staff, another partnership program for middle managers was introduced with a broader range of subjects, such as project management, continuous improvement, finance, and human resource development. This middle manager program was also linked to the university-validated qualifications.

This case showed workforce development outcomes for both the organization and individuals. First, because the Cummins-Business School programs identified the underpinning knowledge and skills needed by middle managers and subsequently incorporated them into the courses, Cummins gained a tailored management training program. Second, Cummins was able to train its managers to change the managerial attitudes and behavior that the new culture under CPS required. Third, because participants gained recognition in the form of nationally validated university qualifications while working and attending in-company programs, individuals earned not only professional knowledge for their careers but also an academic credential (Keithley & Redman, 1997). Overall, these three cases showed that university-industry partnerships assisted companies with employee development, organizational development, and improved organizational performance.

Partnership Training

Under the traditional employment system, companies developed complex internal labor markets that supported a higher-quality workforce through their own training. However, as investment in training grew costly, only larger companies could retain comprehensive training centers to assess, train, and evaluate their workforce, and more companies faced ‘make or buy’ decisions about training. Although moving training activities outside the organization's boundaries presents several disadvantages, one of the major trends in organizations in the 1990s was to outsource training functions (Knoke, 1997; Osterman, 1995; Stewart, 1999).

An empirical study indicated that most American organizations utilized outside training providers for employee training. A 1991 national survey indicated that only about five percent of organizations developed and implemented training with their own staff and/or parent organization staff. Twenty-three percent of organizations used only external training providers, a percentage that was higher among smaller organizations. Thirty-five percent of organizations had their own training staff and also paid outside training providers. The remaining 37 percent had their own training staff, a parent organization's training staff, and outside training providers. This survey showed that most organizations collaborate with external training providers for employee training even when they maintain their own training staff (Knoke, 1997).

Knoke and Janowiec-Kurle (1999) described three training modes that utilize external training providers: (1) the company asks employees to obtain training, individual employees choose and attend training programs from external providers, and the company pays the cost; (2) company staff set training goals and objectives, choose external training providers, and ask them to conduct the entire training program; or (3) company staff and external training providers collaborate to develop, design, conduct, and/or evaluate the entire training program.

Why Outsource?

It is evident that the choice between internal and external training providers depends on budget and costs. That is true in many cases not only for private companies but also for the public sector. For instance, after its intensive study of outsourcing training, the Center for Naval Analyses recommended that the Navy outsource training to community colleges (Golfin et al., 1998).

However, there are more complex issues beyond cost-benefit considerations: there are tradeoffs to using internal or external staff. The key advantages of using internal training staff as trainers are as follows. Internal trainers (1) understand the company's culture and organizational characteristics, (2) are perceived as colleagues by the trainees, and (3) are seen as having more influence with company management. In contrast, external training providers (1) bring knowledge of other organizations, (2) are perceived as more expensive, so their opinions are perceived as more valuable, and (3) are seen as objective (Hardingham, 1996).

A variety of external training providers offer a range of training, from highly sophisticated, specialized training to generic, basic skills training. Some providers are non-profit entities such as voc-tech schools and community-based learning centers, while others are commercial training providers. Community colleges and universities can fall into either group.

Studies of training effectiveness have produced contradictory results. Researchers on partnership training assert that partnership training between internal staff and external training providers brings far more benefits to the company than using either internal training staff or external training providers alone (Hardingham, 1996). On the other hand, Moore (2003) asserted that internal staff implemented more effective training programs than external training providers did.

Nature of the Relationships with Training Providers

Forming an effective relationship with outside training providers is as important as selecting an appropriate provider. This section discusses effective relationships with outside training providers. A company may have a formal relationship, with a formal contract and an exchange of company strategy with outside partners, or it may have only an informal relationship without a contract. Formal partnerships have usually involved a contract to provide training between community colleges and business organizations. Many researchers have asserted that a formal contract is necessary for training programs to produce good-quality training outcomes (Hawley, 2003b; Kenis & Knoke, 2002).

By having formal contracts, several training conditions, including training programs, training contents, expected outcomes, and trainers' or trainees' prerequisites, can be made more explicit. In addition, a formal relationship reflects management's level of involvement and commitment. Because many studies have shown that management support is one of the strongest factors in the success of training programs, a formal relationship has been considered critical to successful implementation. For example, O'Rear (2002) found that business leaders' support for training in the training design and evaluation process produced a significant relationship between training and job performance.

Training Needs Assessment

This section reviews the literature on needs assessment and training needs assessment in three parts. The first part describes the definitions of needs assessment and training needs assessment. The second part reviews research on ways of conducting training needs assessment. The last part reviews the roles of needs assessment.
Definition of Needs Assessment and Training Needs Assessment

It is widely believed that needs assessment is one of the key phases of the training design process for enhancing training effectiveness. Needs assessment assists in identifying a gap or discrepancy between an ideal level of performance and the current level of performance, and in prioritizing current resources to reduce those gaps (Altschuld, 2003).

Needs assessment can be defined as “a systematic set of procedures undertaken for the

purpose of setting priorities and making decisions about program or organizational

improvement and allocation of resources. The priorities are based on identified needs”

(Witkin & Altschuld, 1995, p. 4).

Training needs assessment can likewise be defined as an ongoing process of gathering information to identify training needs so that training can be developed to help organizations meet their objectives (Brown, 2002). Thus, it ultimately helps in developing need-based training programs that meet organizational and/or individual training needs. Accordingly, Brown (2002) asserted that training needs assessment is essential to the success of training programs.

Although numerous studies exist regarding ways to conduct needs assessment, few studies can be found that test the impact of needs assessment on training outcomes or the relationship between the quality of needs assessment and training outcomes. In this section, relevant literature on training needs assessment is reviewed.

Ways of Conducting Needs Assessment

One of the major topics discussed most frequently in needs assessment is data gathering techniques. These include observation, survey questionnaires, subject matter expert consultants, interviews, group discussions, tests, review of relevant documents, and work samples (Goldstein, 1993). Witkin and Altschuld (1995) categorized needs assessment data gathering methods into three types based on data sources: archival data, noninteractive and interactive communication, and analytic data. Archival data include records, logs, demographic data, census data, epidemiological studies, and social indicators. Noninteractive communication data come from written questionnaires, key informant interviews, or the critical incident technique. Interactive communication data come from nominal group techniques, focus group interviews, or futures scenarios. Analytic data can be gathered through fishboning, success mapping, task analysis, fault tree analysis, risk analysis, or trend analysis.

McClelland published a series of articles regarding data gathering methods,

specifically for training needs assessment such as survey questionnaires (McClelland,

1994a), individual interviews (McClelland, 1994b), focus group interviews (McClelland,

1994c), and on-site observation (McClelland, 1994d). His articles described the strengths and weaknesses of each data-gathering method.

Because circumstances, timelines, budgets, and other environmental and political factors influence the quality of data-gathering practice, many techniques have been introduced to identify the most appropriate methods given limited resources. Many studies have described the advantages and disadvantages of different data-gathering techniques; these studies are briefly reviewed here.

Archival data that can be used to identify training needs include production records, defect records, accident logs, social indicators, and other data related to the training area. Archival data are mainly quantitative, which assists in determining the current status of a problem or training need. This type of data is relatively easy to gather but often provides only limited information related to training needs (Witkin & Altschuld, 1995).

Survey methods are the most frequently used in training needs assessment. Surveys are relatively simple to administer to a large number of people, can collect a large amount of information, and can minimize some forms of bias. Many statistical inference methods are also available to analyze the collected data (McClelland, 1994a). Because surveys are highly structured, identified needs are uniformly collected, but the data are mostly qualitative, such as the perceptions, opinions, judgments, or observations of the respondents (Witkin & Altschuld, 1995).

On the other hand, surveys also have some disadvantages. Surveys might take a long period of time to collect data, even when administered through the internet or email. Surveys may provide unclear, often subjective results, and it is difficult to ask participants follow-up questions (Gray et al., 1997). In addition, people can

easily provide an expected or socially accepted answer rather than their true opinion

(Moore et al., 1987).

The interview is another common method in training needs assessment.

Interviewing includes key consultation with those who understand the training needs for a

group or organization, individual interviews with persons who will participate in training

sessions, and focus group interviews with individuals who are knowledgeable regarding

training needs (Miller & Hustedde, 1987). Interviews provide ample opportunities for

respondents to express their opinions and feelings more completely than other methods.

In particular, expert opinions and judgments can be obtained by interviewing experienced experts in specific areas (Witkin & Altschuld, 1995).

Unlike other methods, focus group interviews allow the group to reach a certain level of consensus for generating potential solutions. Focus group interviews and other group procedures also uniquely allow follow-up discussion to inquire further into the rationale for suggestions or the genesis of ideas, refining those expressions through discussion with others (Eastmond, 1994). Group techniques additionally serve to inform people about what is taking place, thereby encouraging their support for the effort (Rossett, 1987).

Because various group techniques can also yield unexpected results or ideas regarding training needs, and because they serve various purposes of training needs assessment, they, like focus group interviews, are among the most widely used techniques for gathering information regarding training needs (McClelland, 1994c). The

disadvantages of individual interviews and focus group interviews include the amount of

time required and the skills necessary as an interviewer or facilitator of the group

(Goldstein, 1993).

Training needs assessment can also be conducted through observation at the work site by subject matter experts. Although the observation method requires a great amount of time and observers must have both content and process knowledge, it can provide information highly relevant to the work setting (Goldstein, 1993). Because relatively few subjects can be observed, this technique is usually limited to a specific job level (McClelland, 1994d).

Besides the conventional training needs assessment techniques mentioned earlier, practitioners now use specific models or analytical frameworks for training needs assessment, such as fishboning, success mapping, fault tree analysis, and human resource competency models (Gorman et al., 2003). Using these

frameworks helps identify the root causes of, or causal relationships underlying, organizational problems. However, they require experts who are highly experienced in these techniques (Witkin & Altschuld, 1995).

Role of the Training Needs Assessment

Another fundamental research area in needs assessment is identifying the roles or purposes of the needs assessment. Obviously, the first purpose of needs assessment is gap analysis between desired performance and current measured performance to identify training needs (Kaufman et al., 1993). However, that is not its only purpose. Many studies have found additional beneficial roles of training needs assessment.

A study of public organizations stated that the top three roles of training needs

assessment are (1) to introduce new programs, (2) to address performance and

productivity problems, and (3) to link employee performance with organizational goals

(Gray et al., 1997).

Rossett (1987) described six main purposes of training needs assessment: (1) to determine detailed discrepancies between optimal and actual status, (2) to understand the perceptions of real people in the real world, (3) to look for the causes of a problem, (4) to set priorities, (5) to involve significant parties, and (6) to train management in ways of looking at problems.

Sleezer (1993) described training needs assessment as a process of managing the interactive relationships among three components: organizational characteristics, decision-maker characteristics, and analyst characteristics. Training needs assessment is not a process of searching for the right answer to organizational problems; it is a dynamic process, and thus it enables organizations to negotiate limited resources.

Brown (2002) also described four main roles of training needs assessment. The first role is to identify the specific organizational problems that will guide the direction of training. Although individual departments may recommend a specific training program to an HRD department, no one really knows which training program solves which specific problems. Thus, training needs assessment should first identify specific problems in the organization.

The second role is to obtain management support for the training program. Because it is generally not easy for managers to see the actual impact of training programs on their department's or organization's performance, training needs assessment can help managers see the potential results of the training program. The third role is to generate data for

evaluation. Because training needs assessment sets specific goals of the training programs,

those goals can be the criteria when the results of training programs are evaluated. Thus,

training needs assessment also can serve as a basis for evaluating the effectiveness of the

training program.

According to Brown, the fourth role of training needs assessment is to analyze the costs and benefits of training. A training needs assessment can identify both the potential benefits of specific training, through discrepancy analysis of performance, and the potential costs of conducting the training. This cost-benefit analysis assists top management in making decisions to invest in training based on identified needs.

In summary, few studies were found that address the actual impact of needs assessment on training. Most existing studies on training needs assessment discuss the ways to conduct needs assessment, the levels or areas to be assessed, and the roles and purposes of needs assessment. The appropriate selection of data-gathering methods is important, as are the levels and areas assessed, but most of all, how carefully the needs assessment is planned and implemented might be the most critical piece of information to gather to demonstrate training impact.

Nature of the Training Program

Several characteristics determine the nature of a training program. For example, whether the training program is job-specific or job-related determines one aspect of the nature of the program. The type of training program and the employee levels involved determine other aspects. This section of the literature review

discusses some of those characteristics which determine the nature of the training

program. They are customization and relationship to job.

Customization

The extent of customization is one of the greatest determinants of the nature of the training program, especially in partnership training contexts. Although in-house training program developers may customize some standard instructional programs to fit their organization's needs, customization originated with outside training providers, in particular higher education institutions in partnership training contexts (Bragg & Jacobs, 1993).

Since the level of customization varies across training programs, there is no definitive definition of customization or of customized training among scholars. Although Grubb and Stern (1989) defined customized training as "relatively firm-specific skill training for individual firms, and therefore, a form of training which is more specifically responsive to a firm's requirements than are general vocational programs" (p. 31), this does not give a clear idea of the boundaries of customization.

Researchers in the United Kingdom (UK) described three levels of customization. At one extreme is an almost standard training program, a so-called generic training program, with a very small degree of customization. A customized training program refers to the middle level of customization, in which some content, examples, sequencing, or contexts of a standard program are customized. At the other extreme is a tailored training program, which is newly developed and uniquely structured for the client's specific training purpose (Brown, 1999; Prince, 2002).

However, because other researchers (Sole, 1999) use terms such as "customized" and "tailored" differently, it is hard to find a universal definition of customized training. For the purposes of this study, all degrees of customized or tailored training programs will be defined as "customized training programs," distinguished only by the degree of revision and development.

Customization is not limited to content; it also includes delivery methods and course pace (Bragg & Jacobs, 1993). Moore, Blake, Phillips, and McConaughy (2003) described three customization areas in their training evaluation study: adjusting the training pace and content to the level of the trainees; using examples, tools, and/or materials drawn from the company context; and aligning the training with the company's culture.

Higher education researchers have also noted that community colleges provide a wide range of customized training programs, from standard credit-generating college courses to highly specific non-credit technical and managerial training (Bragg & Jacobs, 1993). However, their definition of customized training includes a contractual component. The Ohio State Council on Vocational Education (1987) described customized training as "enter[ing] into contractual arrangements with a particular business (or group of business) for specialized training services" (p. 11). Contracts, either written or verbal, between community colleges and client organizations govern the development and delivery of customized training.

Motives of Customization

Two approaches are available to investigate the benefits or impacts of customized training. The first is to investigate the reasons or motives for adopting customization; the second is to assess the actual impacts or perceived benefits of customized training.

Basically, companies seek customized training because it is believed to be designed for immediate application to their employees' jobs. They believe that customized training programs offer occupational or firm-specific skills, so that participants acquire specific job skills or knowledge to be used in the everyday workplace. Thus, customized training is expected to have an immediate impact on employees' work performance (Clarke, 1984).

Demand for customized training is increasing for several reasons. First, advanced technology, rapid change in high tech, and the spread of innovation require continuing training, and this increased demand has caused tremendous growth in customized training opportunities (Hodson et al., 1992; Parrish, 1998). Parrish (1998) reported that the application of technology is becoming more sophisticated and requires unique training programs for application in specific business units.

Second, companies face diverse training needs to solve complicated organizational issues. For example, one article reported that customized training is necessary for diversity management training for managers and for training that helps women and ethnic minority groups enter senior positions (Rajan & Harris, 2003). Another article

regarding future leadership development described that individuals with leadership

potential should be provided with appropriate just-in-time tools such as customized

training (Dare, 1999). Many business schools also experienced dramatic growth in

demand for customized development programs (Brown, 1999).

Finally, customized training has surfaced as a significant component of community college education, particularly as community colleges have expanded their involvement in regional economic development (Bragg & Jacobs, 1993). One

national survey showed that 93 percent of community colleges engaged in customized

training by the late 1980s (Lynch et al., 1991). Community colleges have undergone

consistent reform in their education and training programs to meet business industry and

community needs. One of the new approaches that community colleges implemented is

customized training (Bragg, 2001).

Advantages and Disadvantages from Customization

Several studies have attempted to demonstrate the advantages and disadvantages of customized training. One of the greatest advantages is that customized training assists companies in training their employees according to their specific needs. Based on social learning theory, people learn better from familiar materials and can apply training more easily when the learning materials resemble their actual work materials. Warren (2000) introduced a case study of customized training using

qualitative research methodology with a quantitative component. She examined 42

classes in one collaborative, customized training program between a small rural community college and a large business corporation. Her data collection had three phases: (1) reviewing written course evaluations, (2) interviews with participants and participants' managers, and (3) interviews with program participants, the client company's administrators, and community college staff as well as participants' managers. From these three phases of evaluation, she concluded that the customized training was successful: a new partnership between the community college and the company was formed for a new four-year degree program, some courses are now broadcast over interactive television, and many follow-up partnership activities emerged after the program. Most of all, the evaluation indicated that the original purposes of the customized training program were achieved: to equip middle managers without college degrees with more knowledge of the company's long-term missions and plans, and to create more confidence as leaders of other employees. As a bottom line, Warren asserted that community colleges need to realize that training must be customized to meet both the company's and non-traditional learners' needs.

Brown (1999) conducted a survey of directors and managers in training and development and human resources functions in the UK to investigate management development training programs in terms of level of customization and level of measurement of learning. He gathered responses from 98 organizations, mostly large corporations. His survey results showed that highly customized training programs with a high level of measurement of learning (credit-based courses) offer several advantages over non-credit, less customized programs. The advantages were

“(1) to induce higher participant energy and motivation level, (2) to encourage

more rigorous learning, (3) to offer higher relevance, (4) to offer opportunities for

team building and culture change, (5) to provide more opportunities for the

application of learning and better support for participants from the employers, and

(6) to make it easy for employers to monitor the quality of training program and

participant’s performance.”

Several studies have also reported the relevance of instructional content, as well as the relevance of instructional design, as critical and necessary factors supporting transfer of training. To increase the applicability of training in the workplace, researchers have asserted that training programs should increase the similarity of training content to the work setting (Baldwin & Ford, 1988; Garavaglia, 1993; Yamnill & McLean, 2001).

Regarding disadvantages, Brown's study described fewer opportunities to exchange ideas with managers from other companies and less open debate, because power relations from the workplace may influence the learning environment. It also expressed the concern that answers or solutions prescribed by the client company's training developers might limit the expression of participants' ideas.

Hodson, Hooks, and Rieble (1992) also conducted a study of customized training with in-depth interviews of 65 human resource directors, trainers, and training participants in 20 manufacturing plants. Their study specified three different settings: (1) large, unionized monopoly-sector companies that have developed intensive training programs, (2) smaller, relatively marginal-sector companies that use state support for training, and (3) new start-up companies that use training in communication skills and group processes for specific job skills. The study measured customized training by autonomy and task complexity. The results showed that the first group of large companies implemented the most intensive training programs and that customized training helped bring specific usable skills, networks, norms, and trust. However, a significant limitation of customized training found in this study is that it developed transferable skills but generally without a credential certifying possession of those skills.

Another benefit of customized training is that it may be more cost effective. If the pure training costs of fully customized training programs are calculated, they might be higher than those of standard or moderately customized training. Nevertheless, in one study, training providers asserted that customized training is more cost effective: companies might pay for an employee's training time for one or two days of standard training when they could teach exactly what they want via a half day of customized training (Sole, 1999).

Relationship to Job

There are two ways to describe the relationship of training programs to jobs. First, the training programs implemented at a company can be categorized as either job-specific or job-related. It is generally believed that job-specific training is more relevant to performance improvement, since it is more practical and related to current tasks. Job-specific training has been found to result in higher earnings for low-wage workers than training that is not firm-specific (Ahlstrand, 2003).

The type of training is another way to describe the relationship of training programs to jobs. Types of training can be categorized as (1) technical, (2) managerial, and (3) non-job-related (or awareness) training (Jacobs, 2001a). According to Spitzer (1984) and Laker (1990), technical training is more effectively implemented when designed for near transfer rather than far transfer. Because near transfer is successful when the training setting and content reflect the workplace, the effectiveness of technical training can be believed to depend on the extent of customization (Baldwin & Ford, 1988).

On the other hand, according to Laker (1990), far transfer would be the objective of long-term development and management development. Thus, managerial training programs might be more effective when implemented with a far-transfer training design.

Conceptual Framework

Perceptions of training effectiveness are the dependent variable in this study. Training program effectiveness refers to whether training achieves its intended purpose or goal, and perceptions of training program effectiveness refer to how a client organization perceives training program effectiveness. Training effectiveness can be measured either by trainees or by supervisors at an individual level, and the aggregated data are often reported as training effectiveness (Brinkerhoff & Gill, 1994; Kirkpatrick, 1996; Ahlstrand et al., 2003). The first hypothesis is that, if perceptions of training effectiveness properly evaluate training effectiveness, they should have a positive relationship with the financial performance of client organizations.

The outcomes of partnership training for client organizations depend not only on the quality of the training needs assessment and the nature of the training program, but also on how business organizations have developed relationships with training providers. Partnerships among organizations contribute to the quality of the training through three main factors: the nature of the provider-client relationship, the provider's training needs assessment, and the nature of the training program. Based on the literature review, the following hypotheses were developed for this study.

Formal partnership contracts are hypothesized to create benefits for firms. With formal contracts, several training conditions, including training programs, training content, expected outcomes, and trainers' or trainees' prerequisites, can be written more explicitly. In addition, a formal relationship represents management's level of involvement and commitment. Because many studies have shown that management support is one of the strongest factors in the success of training programs, formal relationships may be critical to successful implementation. It can be hypothesized that training programs with external training providers who have a formal contract and exchange information about the company's strategy are more relevant to training program effectiveness than programs with external providers who lack a formal relationship. In addition, if the external training providers are involved in more stages of the training process, have more previous contracts, and initiate follow-up contact, it is hypothesized that training effectiveness is higher.

While training needs assessment is significant and most training providers implement some form of it, needs assessment does not automatically guarantee training effectiveness. The level and quality of the actual needs assessment might increase training effectiveness and its impact on organizational performance. Thus, we can hypothesize that the more levels of focus (among individual, work process, and function) a provider's training needs assessment covers, the greater the perceptions of training effectiveness. In addition, it can be hypothesized that a needs assessment that identifies performance levels both before and after training results in greater perceptions of training program effectiveness than one that identifies performance levels only before or only after training.

The nature of the training program is also hypothesized to have a relationship with perceptions of training effectiveness. It is believed that if training programs use the company's data, formats, materials, and/or equipment for instructional materials, trainees can learn more and can more easily apply this learning to their workplace (Brown, 1999). Thus, we can hypothesize that the higher the level of customization in partnership training programs, the more relevant they are to training program effectiveness. The level of customization can be defined in three ways: (1) a generic/standard training program with little customization, (2) a moderate level of revision, and (3) a newly developed training program. The areas to be customized are the components of a lesson plan: training purpose, trainees' prerequisites, content, delivery methods, and evaluation. Thus, it can be hypothesized that more customized training programs increase perceptions of training program effectiveness. In addition, job-specific training programs are hypothesized to yield higher perceptions of training effectiveness than job-related training programs.

[Figure 2.1 diagram: three independent-variable constructs lead to Training Program Effectiveness (level of training evaluation, relative perception, financial performance), with Types of Training Provider (internal staff, community college, four-year university, vocational center, adult learning center, private training vendor) shown alongside:
- Nature of the Provider-Client Relationship: level of involvement, follow-up contact, previous contract history, knowledge of the business, formality
- Provider's Training Needs Assessment: level of focus, prioritize training issue, timing of need
- Nature of the Training Program: extent of customization, relationship to job, employee level, types of training, expected outcome]

Figure 2.1: Conceptual Framework for the Study of Training Program Characteristics and Training Effectiveness among Organizations Receiving Services from External Training Providers
CHAPTER 3

METHODOLOGY

This chapter describes the methodology that was used to answer the proposed

research questions. The first section describes the research type, and the second section

describes the research background, study setting, and sample. The third section explains

the measurement approach, including the operationalization of variables, instrument design and development, and instrument validity and reliability. The fourth section describes data

collection procedures and data analysis.

Research Type

Ex post facto research with descriptive analysis was the main method used in this study to investigate the impact of the nature of the relationships between client organizations and external training providers, training needs assessment, and the nature of the program on training effectiveness and client organizations' performance. Unlike quasi-experimental studies, ex post facto research belongs to descriptive analysis, whose main purpose is to illustrate "what is." Thus, while quasi-experimental studies are used to identify causal relationships between independent and dependent variables, ex post facto research cannot determine direct cause-and-effect relationships (Ary et al., 2002).

Ex post facto research is conducted after variation in the variables has occurred naturally; the variables could be manipulated but are not manipulated in such research. Because ex post facto research can describe potential relationships among variables, it is also called causal-comparative research. It enables researchers to predict one variable or set of variables from knowledge of another (Ary et al., 2002). Therefore, in this research, the independent variables occurred naturally and were observed before the dependent variables. With knowledge of the independent variables, the researcher is able to predict the dependent variables while controlling for potential attribute variables (Hair, 1998).

The researcher proposed a conceptual framework in the previous chapter (Figure 2.1) based on an extensive review of the literature on training evaluation, partnership training, training needs assessment, and the nature of the training program. The framework relates four variables: the nature of the relationships between client organizations and external training providers, training needs assessment, the nature of the program, and training program effectiveness. The three major independent variables, the nature of the provider-client relationships, the provider's training needs assessment, and the nature of the program, occurred naturally preceding the dependent variable, training program effectiveness. Therefore, ex post facto research was the most appropriate method for this study.

Research Setting and Sample

Research Setting

Four criteria were considered in selecting a suitable research setting for this study:

- Because the unit of analysis is a training program, the subject is a company, not an individual.

- To be included in the study, companies should have implemented and completed training programs to increase their organization's performance.

- Companies should have an extensive understanding of training needs assessment, customization, and the provider-client relationship.

- The sample companies should have variance in types of training providers.

Considering these criteria, the researcher selected companies that received training funds from the state government. The Ohio Investment in Training Program (OITP) was established to help businesses in Ohio enhance their competitiveness and increase their organizational performance through training. Companies that received OITP training dollars were used for this study.

The Ohio Department of Development (ODOD) manages diverse economic

development programs to attract, create and retain jobs for Ohio businesses. The OITP is

administered by ODOD and the OITP regional training coordinators, who are based at 12

Regional Economic Offices. Their role is to support businesses by providing technical

and financial resources for employer-based training to attract new business and to

maintain and upgrade the skills of Ohio’s workforce. The OITP is one of the few

programs in Ohio that provides direct financial support to employers for training.

The mission of the OITP is "to support the mission of the State of Ohio, the Ohio Department of Development through its Economic Development Department, the OITP will promote productivity and quality in workforce development by providing financial and technical assistance to companies and collaborative efforts that strengthen Ohio's workforce" (OITP, 2002, p. 10).

Employers can request training grant funds through a Request for OITP

Assistance. The OITP regional training coordinators provide no-cost assistance in preparing applications, training budgets, and reimbursement requests. The amount of a grant is determined by the training activities and the costs associated with identified training needs. Employers can choose the training provider, whether a company employee or an outside provider.

Although most companies can request OITP funds for their training, there are some restrictions. Media and similar businesses in industries such as publishing, motion picture and sound recording, broadcasting and telecommunications, and information services and data processing services are not eligible for the government funds because of constitutionally protected freedom of speech and of the press. In addition, companies cannot

request any training costs associated with college-credit courses.

The OITP recipient companies have implemented and completed training programs to increase their organizational performance. In addition, they have consistently made an effort to monitor performance through training. Also, because they have the right to select training providers, there is enough variance among types of training providers in this database. Thus, the researcher selected the OITP fund recipient companies as the research setting for this study.

Sample

This study used primary and secondary data for analysis. The primary data was obtained through surveys of the OITP fund recipient companies. The secondary data was obtained through an analysis of the OITP database.

Primary data

The population of this study consists of all companies that received training funds through the OITP and spent those funds on their employees’ training. The sampling frame was obtained from the current OITP database, which was last updated on October 1, 2004. Since the OITP first provided training funds to companies under that name in 1999, 432 companies have participated and are currently listed in the OITP database.

The researcher conducted a census, studying the entire accessible population.

The researcher chose all the companies that implemented and completed training

programs between January 2002 and June 2004. The time-based selection criterion was set to measure the dependent variables accurately, with fewer intervening variables. If a training program was completed more than three years before the survey, senior managers’ perceptions of that program’s effectiveness might not be reliable because of the length of time between the training and the survey. Shortening this interval somewhat controls for the effect of history. In addition, since the training impact may diminish over time, the researcher included only training programs completed within the past three years (Jacobs, 2003).

On the other hand, at least a six-month interval is necessary between completion of the training program and measurement of its impact on organizational performance. Researchers on training outcome measurement assert that a one-year interval is generally adequate for the service industry and six months is appropriate for the manufacturing industry (Banker et al., 2000). Consequently, the researcher required at least a six-month interval between completion of a training program and measurement of its impact on the client’s organization. Based on these criteria, 189 companies implemented and completed training programs between January 2002 and June 2004.

Several efforts were made to control errors. Descriptive survey studies are subject to four kinds of error: sampling error, selection error, frame error, and non-response error. Sampling error refers to a nonrepresentative sample and can be controlled by using random sampling. Selection error can occur when some sampling units have a greater chance of being selected than others. Because this study is not an inferential study, sampling error and selection error did not need to be controlled. However, it was possible that some population units had a greater chance of being selected than others. The researcher therefore reviewed the current OITP database list and eliminated three duplicate entries, controlling this error. Any omitted companies, however, could not be identified by the researcher.

Frame error refers to a discrepancy between the intended target population and the actual population frame. The researcher controlled frame error by obtaining an up-to-date database from the OITP. In addition, the researcher made sure that every company had complete contact information for the survey, either a telephone number or an email address.

Non-response error occurs when a subject fails to respond, refuses to respond, or does not return the mailed questionnaire. The researcher made great efforts to increase the response rate by calling and emailing non-respondents before the deadline. After the deadline, the researcher controlled for non-response error by comparing company and training program characteristics between respondents and non-respondents.

The survey collected information on most independent variables and one dependent variable, operational margin. It asked whether a training needs assessment was conducted, the extent of customization, the nature of the contract with training providers, participants’ perceptions of training effectiveness, and the changes in operational margin.

Secondary data

The Economic Development Division of the Ohio Department of Development maintains and updates the OITP database, which contains general information about OITP participant companies and their training programs. The analysis of the OITP database provided a sampling frame for drawing the survey list, basic demographic information about participants, and basic training information. It provided information such as the company training coordinator’s contact information (phone number, email address, or both), company name, geographic location, company size, industry sector with SIC code, company ownership, type of training, and training start and end dates.

Operationalization of Variables

This section describes how the four constructs of this study were measured: 1) the nature of the provider-client relationship; 2) the provider’s training needs assessment; 3) the nature of the training program; and 4) training program effectiveness.

Nature of the Provider-Client Relationship

The nature of the provider-client relationship was measured by five independent sub-variables: level of involvement, contract history, knowledge of the business of the client organization, formality of the contract, and follow-up contact.

Level of involvement

This variable refers to the extent of the external training provider’s involvement in the client organization’s training process. It was measured by the number of stages of the training process in which the external training provider participated. The stages of the training process are planning, developing, delivering, and evaluating the training program.

Contract history

This variable refers to the client organization’s previous experience working with the external training provider. It was measured by the number of previous contracts the client organization had with the provider.

Knowledge of the business of the client organizations

This variable refers to whether the external training provider has sufficient knowledge about the client organization’s business. It was measured through the training manager’s perception and the provider’s previous experience with other companies in the same industry as the client organization.

Formality of the contract

This variable refers to whether the client organization has a formal contract with its training provider for the training services provided. Formality was assessed on a four-level ordinal scale: 1) a written contract every time; 2) an initial written contract followed by oral contracts; 3) an oral contract every time; or 4) neither a written nor an oral contract.

Follow-up contact

This variable refers to whether the external training provider initiated follow-up contact after the training programs were completed.

Provider's Training Needs Assessment

This variable refers to whether a needs assessment for training was conducted before implementing the actual training program. The training needs assessment was assessed by the level of focus, the timing of needs measurement, and the prioritizing of training issues.

Level of focus

The level of focus refers to whether the training needs assessment gathered performance information at the individual level, the work process level, and/or the function level.

Timing of need

The timing of need refers to whether the training needs assessment identified performance levels before training and whether it projected performance levels after training.

Prioritizing

Prioritizing of training issues focuses on whether the training needs assessment actually prioritized training issues and whether the current training program reflected the results of the assessment.

Nature of the Program

Extent of customization

This variable refers to the extent to which a training program was customized, ranging from not customized at all to an entirely custom-developed training program. Customization was measured by the total score of 10 items indicating whether each component was customized. These items were developed based on the components of a training lesson plan: training objectives, trainee prerequisites, instructional materials (data), instructional materials (format), instructional materials (equipment/tools), training timetable, delivery method, performance test, and evaluation form.

Relationship to job

This variable refers to how closely the training is related to the main tasks of the training participants. It was assessed as either job-specific or job-related.

Employee level

This variable refers to the level of employee who participated in the training. It

was assessed by four levels: 1) frontline employee, 2) supervisor, 3) manager, and 4)

executive.

Types of training

This variable refers to the type of training content. It was measured as technical training, managerial training, or awareness training.

Expected outcome

This variable refers to the intended goal of the training program, planned in advance. It was assessed by selecting one of ten items developed based on Kirkpatrick’s model: 1) to have trainees gain knowledge, 2) to increase trainees’ skills, 3) to change trainees’ behavior, 4) to change trainees’ attitude, 5) to motivate trainees, 6) to increase overall productivity, 7) to increase customers’ satisfaction, 8) to increase employees’ satisfaction, 9) to increase the company’s market share, and 10) to increase the company’s earnings (profits).

Training Program Effectiveness

Training program effectiveness refers to the degree to which the training reaches

the intended objective(s) or immediately expected outcome. This construct was measured

by the perceived training effectiveness and financial performance.

Perceived Training Effectiveness

Perceived training effectiveness refers to senior managers’ perception of how effectively a specific training program reached its intended goal. It was measured by the level of area and by relativity. The level of area refers to whether training is effective in such areas as employee development and organizational development; each area was measured by 8 items. Relativity refers to how much more or less effective the specific training program was than training in general in the client’s organization. It was measured by comparing senior managers’ perceptions of general training programs’ effectiveness in their organization with their perceptions of the specific training program’s effectiveness in each area.

Financial Performance

Financial performance refers to measurable, identifiable training results in monetary form on the client’s organizational performance, such as increased market share, reduced costs, and earnings. This construct was measured by the increase in operational margin, calculated as the change in operational margin between the start of the training and six months after its completion. Operational margin refers to after-tax operating income as a proportion of total revenue. Operational margin and its increase were calculated with the following equations.

Operational Margin (OM) = Operational Income × (1 − tax rate) / Total Revenue

Increase of OM = (OM after training − OM before training) / OM before training
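The two equations above can be sketched directly in code. This is a minimal illustration assuming the convention that a positive increase denotes growth in the margin; the figures below are invented, not data from the study:

```python
# Hedged sketch of the operational margin calculations described above.
# All numbers are illustrative only, not from the OITP data.

def operational_margin(operational_income: float, tax_rate: float,
                       total_revenue: float) -> float:
    """After-tax operational income as a share of total revenue."""
    return operational_income * (1 - tax_rate) / total_revenue

def om_increase(om_before: float, om_after: float) -> float:
    """Relative change in operational margin over the training period."""
    return (om_after - om_before) / om_before

# Illustrative (assumed) figures for one hypothetical company:
om_before = operational_margin(1_200_000, 0.35, 10_000_000)  # ≈ 0.078
om_after = operational_margin(1_500_000, 0.35, 10_500_000)
print(round(om_increase(om_before, om_after), 4))
```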

Instrument Development

Instruments for the two surveys were developed by the researcher with the assistance of academic advisors and committee members at the Ohio State University. This section describes how instrument validity and reliability were established and how the instruments were designed.

Instrument Validity

Instrument validity requires that the instrument measures what it is intended to

measure. To establish validity, the instruments were developed based on an extensive

literature review in human resource development, program evaluation, vocational

education, measurements, and higher education (Ary et al., 2002).

A panel of experts reviewed the questionnaires for content validity. The panel was composed of scholars, practitioners, and government officials: two research method professors, one research method scholar, one HRD professor, one Workforce Development and Education professor, four HRD practitioners from industry, and two government officials in the economic development area. The experts were asked to evaluate the clarity of the questionnaires; if they found any unclear wording, they were asked to mark it and propose more appropriate wording. They were also asked to review the appropriateness of the items, to mark any inappropriate item, and to provide any suggestions for improving the questionnaires’ content validity. Additionally, the research method experts were asked to review the format and layout of the questionnaires.

Instrument Reliability

Instrument reliability refers to the concept that an instrument measures what it should measure consistently across samples. A pilot study can help ensure instrument reliability; however, because this study is a census, it was not possible to draw a pilot sample, and the researcher did not pretest and posttest, to avoid reducing the response rate. To ensure instrument reliability, an analysis of internal consistency was conducted among the responses. Cronbach’s alpha coefficient shows the degree of internal consistency across items measuring one underlying construct. It ranges from 0 to 1, and values over .7 are generally accepted as reliable. Table 3.1 shows Cronbach’s alpha coefficients for the survey responses.

Variables                                                              Cronbach’s Alpha
General training effectiveness on employee development                       .878
General training effectiveness on organizational performance                 .910
Overall general training effectiveness                                       .926
Specific training program effectiveness on employee development              .908
Specific training program effectiveness on organizational performance        .867
Overall specific training program effectiveness                              .945

Table 3.1: Cronbach’s Alpha Coefficients for the Survey Responses (n=44, Senior
Manager Survey)
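As a rough illustration of how the internal-consistency statistic in Table 3.1 is computed, the following sketch implements the standard Cronbach's alpha formula in numpy. The item responses are randomly generated stand-ins, not the study's survey data:

```python
# Minimal sketch of Cronbach's alpha, assuming invented Likert responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Eight Likert-type items for 40 hypothetical respondents: a shared
# "true" level per respondent plus small item-level noise, clipped to 1-6.
rng = np.random.default_rng(0)
base = rng.integers(1, 7, size=(40, 1))
noise = rng.integers(-1, 2, size=(40, 8))
responses = np.clip(base + noise, 1, 6).astype(float)
print(round(cronbach_alpha(responses), 3))
```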

Design of the Instruments

Survey to HRD manager

This survey was composed of four parts. Part One asked general background information about the training program: training start date, end date, estimated number of training participants, estimated total number of employees when the training began, duration of the training program, training participants’ employee level, how closely the training related to their main job, who developed the training program, who delivered it, and the intended goal of the training program. In the questions on who developed and delivered the training program, respondents could select multiple answers among in-house training staff, community college, four-year university, vocational center, adult learning center, private training vendor, and not-for-profit training agency, or could specify others.

Questions in Part Two asked about training needs assessment, customization, and the nature of the relationship with external training providers, if an outside training provider was hired. In the questions on training needs assessment and customization, respondents were asked to answer either Yes or No for every activity in each area. In the questions on relationships with training providers, respondents were asked to select the status of their contract and to answer questions on the level of the external training provider’s involvement in the training process, contract history, the provider’s knowledge of the business, and follow-up contact. Part Three asked respondents to provide a senior manager’s name and contact information for a matching survey. Questions in Part Four asked for respondents’ demographic information, such as years of experience with the company, current position (training coordinator or manager, HRD manager, HR manager, supervisor, or other), and years in the current position.

Survey to senior manager

The second survey was composed of two parts. Items in Part One asked for the respondents’ perceptions of 16 statements evaluating the effectiveness of training in general in their organization. The items used a six-point Likert-type scale (strongly disagree, disagree, slightly disagree, slightly agree, agree, and strongly agree) plus a not-applicable option. The same 16 statements were then given to respondents to record their perceptions of the effectiveness of the specific training program funded by the OITP. Respondents were also asked to provide operational income and total revenue information.

Items in Part Two asked for respondents’ demographic information, such as years of experience with the company, current position (managing director, operational VP, training manager, HRD VP or manager, HR VP or manager, quality manager, safety manager, or other), and years in the current position.

Research Procedures

Data Collection

The primary data was collected mainly through a web-based survey of HRD managers or training coordinators and an individualized web-based survey of the matching senior managers. Four surveys were collected through faxed questionnaires, and a phone survey was also attempted. The following section describes how these two surveys were implemented.

Survey to HRD manager

The first survey was distributed by email, in a web-based format, to training coordinators, training managers, HRD managers, or human resource managers. The email asked recipients to participate in the survey, which was sponsored by the Department of Development; a cover letter from the Assistant Manager of the OITP, on Department letterhead with the state government’s logo, was attached to the email. To ensure that the rights and welfare of the research participants were protected, the survey questionnaires were approved by the Ohio State University Institutional Review Board (see Appendix A for a sample of the email).

To increase the response rate, an initial call was made to every contact person who had a phone number. Through these phone calls, the researcher could correct potential recipients’ email addresses. The email recipients could participate in the survey by clicking the link to the survey questionnaire website. After participants completed and submitted the questionnaire, their responses were sent to the researcher’s email account. Five business days after the first survey email was sent, a second email was sent to encourage survey participation and to remind those who had not yet replied.

Several additional phone calls were made and emails were sent out to encourage survey

participation before the deadline.

Survey to senior manager

After responses from the first survey were returned, the second survey was conducted based on the first survey’s information. The second survey, designed to measure perceived training effectiveness and collect operational margin information, was emailed to senior managers. Because each survey had to be customized to a company’s specific training program, customized and individualized web-survey questionnaires were sent out. The email asked recipients to participate in the survey, which was sponsored by the Department of Development; a cover letter from the Assistant Manager of the OITP, on Department letterhead with the state government’s logo, was attached to the email. To ensure that the rights and welfare of the research participants were protected, the survey questionnaires were approved by the Ohio State University Institutional Review Board (see Appendix B for a sample of the email).

The email recipients could participate in the survey by clicking the link address to

the survey questionnaire website. After the participants completed and submitted the

questionnaire, their responses were sent to the researcher’s email account. Five business days after the first survey email was sent, a second email was sent to encourage survey participation and to remind those who had not yet replied. Several follow-up calls were made and emails were sent out to encourage senior managers to participate in the survey before the deadline.

Data Analysis

After the data was collected, it was automatically coded and easily transferred to a spreadsheet database. The data from the secondary sources was merged into one database and analyzed using the Statistical Package for the Social Sciences (SPSS). First, the demographic characteristics of companies, training programs, and respondents were analyzed with descriptive statistics such as frequencies and percentages. The demographic characteristics of companies were size, industry type, and the number of years in business; company size was presented based on the reported number of employees. The demographic characteristics of training programs were the program duration and the proportion of participants among all employees. The demographic characteristics of respondents were job title, the number of years in the current position, and the number of years with the company. Next, the proposed research questions were analyzed as follows.

Research Question One: Does training program effectiveness differ based on types of

training providers? Does the training provided by community colleges and/or four-year

universities result in a higher degree of training program effectiveness in comparison to

other external and/or internal training providers?

First, descriptive statistics were generated to summarize the proportion of different types of training providers in partnership training; frequencies and percentages were calculated. Next, a one-way ANOVA was applied to examine group differences in the degree of perceived training effectiveness among the different types of training providers; the F value was calculated to test the difference. Third, a t-test was performed to examine the difference in perceived training effectiveness between two groups: higher education institutions and other types of training providers.
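The three-step analysis above can be sketched as follows. The provider types, group sizes, and effectiveness scores are invented for illustration; the study does not specify the t-test variant, so Welch's unequal-variance version is used here as one reasonable choice:

```python
# Hedged sketch of the Research Question One analysis: one-way ANOVA
# across provider types, then a t-test of higher education vs. others.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Perceived effectiveness scores (1-6 scale) by provider type (assumed):
groups = {
    "community_college": rng.normal(4.6, 0.6, 12),
    "four_year_university": rng.normal(4.4, 0.6, 8),
    "private_vendor": rng.normal(4.2, 0.6, 18),
    "in_house": rng.normal(4.1, 0.6, 7),
}

# One-way ANOVA: does mean effectiveness differ across provider types?
f_stat, f_p = stats.f_oneway(*groups.values())

# t-test: higher education institutions vs. all other providers
# (Welch's version, which does not assume equal variances).
higher_ed = np.concatenate([groups["community_college"],
                            groups["four_year_university"]])
others = np.concatenate([groups["private_vendor"], groups["in_house"]])
t_stat, t_p = stats.ttest_ind(higher_ed, others, equal_var=False)
print(f"F={f_stat:.2f} (p={f_p:.3f}), t={t_stat:.2f} (p={t_p:.3f})")
```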

Research Question Two: Does the degree of training program effectiveness differ based

on the nature of the relationship among client organizations and external training

providers?

Standard multiple regression analysis was performed to examine the relationship between perceived training effectiveness and the nature of the relationship. The model regressed the degree of perceived training effectiveness on a vector of five sub-independent variables: level of involvement, contract history, follow-up contact, the training provider’s knowledge of the client organization’s business, and formality of the contract. The squared multiple correlation coefficient was presented to describe the goodness of fit of the model by indicating the percentage of the variance in perceived training effectiveness explained by the linear combination of the five sub-independent variables. Because the formality of the contract was measured on a four-level ordinal scale, it was coded as separate dummy variables.
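A minimal sketch of this regression, under invented data: the four-level formality variable is expanded into three dummy variables against a reference level, and the squared multiple correlation (R²) is computed from the residuals. Variable names and values are hypothetical:

```python
# Hedged sketch: OLS regression of perceived effectiveness on the five
# relationship sub-variables, with formality dummy-coded. Data invented.
import numpy as np

rng = np.random.default_rng(2)
n = 45
involvement = rng.integers(1, 5, n)   # stages the provider took part in
history = rng.integers(0, 6, n)       # number of previous contracts
follow_up = rng.integers(0, 2, n)     # follow-up contact (0/1)
knowledge = rng.integers(0, 2, n)     # knows client's business (0/1)
formality = rng.integers(1, 5, n)     # four ordinal levels

# Dummy-code formality: levels 2-4 against reference level 1.
dummies = np.column_stack([(formality == lvl).astype(float)
                           for lvl in (2, 3, 4)])
X = np.column_stack([np.ones(n), involvement, history,
                     follow_up, knowledge, dummies])
# Invented outcome with some signal plus noise:
y = 3.0 + 0.2 * involvement + 0.1 * follow_up + rng.normal(0, 0.5, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
# Squared multiple correlation: share of variance explained by the model.
r_squared = 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(r_squared, 3))
```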

Research Question Three: Does the degree of training program effectiveness differ based

on the quality of provider’s training needs assessment?

Correlation statistics were used to determine the relationship between perceived training effectiveness and the quality of the training needs assessment. The value of a correlation coefficient represents the extent to which two variables are related to each other, with results ranging from a perfect positive relationship (1.00) through no relationship (0.00) to a perfect negative relationship (-1.00).
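The correlation step can be illustrated with numpy's corrcoef; the paired scores below are hypothetical, not the study's data:

```python
# Hedged sketch of a Pearson correlation, with invented paired scores:
# needs-assessment quality vs. perceived training effectiveness.
import numpy as np

quality = np.array([2.0, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5])
effectiveness = np.array([3.1, 3.4, 3.3, 4.0, 4.6, 4.4, 5.2])

# Pearson r ranges from -1.00 (perfect negative) through 0.00
# (no relationship) to 1.00 (perfect positive).
r = np.corrcoef(quality, effectiveness)[0, 1]
print(round(r, 3))
```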

Research Question Four: How do training programs vary in terms of the extent of customization, type of training, relationship to job, proportion of participants versus total employees, and expected outcome? How do these differences in the nature of the training program impact training program effectiveness?

First, descriptive statistics were generated to show how the nature of the program varies across the five sub-variables; frequencies, percentages, means, medians, ranges, and standard deviations were calculated. Next, standard multiple regression analysis was employed to examine the relationship between perceived training effectiveness and the nature of the program. The model regressed the degree of perceived training effectiveness on a vector of five sub-variables. The squared multiple correlation coefficient was presented to describe the goodness of fit of the model by indicating the percentage of the variance in perceived training effectiveness explained by the linear combination of the five sub-variables. Dummy coding was assigned to employee level, type of training, and expected outcome, since they are measured on categorical scales.

Research Question Five: Are perceived training program effectiveness and the client organization’s financial performance (operational margin) related? Is perceived training effectiveness a good indicator of financial performance, or vice versa?

Correlation statistics were used to determine the relationship between perceived training effectiveness and the increase of the operational margin at the client’s organization. The value of a correlation coefficient represents the extent to which two variables are related to each other, with results ranging from a perfect positive relationship (1.00) through no relationship (0.00) to a perfect negative relationship (-1.00).

CHAPTER 4

RESULTS

This chapter presents the results of the study. The first section presents the

descriptive statistics of the study. The second section reports the results for each research

question.

Descriptive Statistics

The first part of this section contains a description of companies that are in the

study population. The companies are grouped by 1) respondents, 2) non-respondents, and

3) not-in-sample frame companies. The second part presents the demographic

information of the 45 respondents in terms of 1) demographic information, 2) training,

and 3) company information.

The OITP database shows that 202 companies received training funds and implemented training programs from January 2002 to June 2004. However, in attempting to conduct the web-based survey, 77 companies were identified as ineligible for inclusion in the sample frame. Seventeen of the 77 cases were omitted because the person in charge of the OITP training was no longer employed and the position had not been reassigned. Eleven cases were omitted due to incorrect contact information. Three cases reported that the company was under different management. The remaining 46 cases could not be reached due to incorrect email addresses or telephone numbers. Thus, the total number in the sample frame is 125.

Participant companies had to complete both surveys to be included in the data analysis. Fifty-seven of the 125 companies replied to the first survey, the manager survey. Forty-five of those fifty-seven replied to the second survey, the senior manager survey. As a result, 45 of the 125 companies completed both surveys, a response rate of 36 percent.

No significant difference was found among the three groups (respondents, non-respondents, and not-in-sample frame companies) in terms of company characteristics. Geographically, the companies are scattered across many counties: respondents (n=45) are located in 22 counties, non-respondents (n=80) in 42 counties, and not-in-sample frame companies (n=77) in 38 counties. The county-to-company ratio is 1:2.05 for respondents, 1:1.90 for non-respondents, and 1:2.03 for the not-in-sample frame group. Table 4.1 presents the frequencies and percentages of industry type for each group; in all three groups the majority of companies are in manufacturing. Table 4.2 presents the average company size, in number of employees, for the three groups: 341.5 for the not-in-sample frame group, compared with 264.9 for the respondent group and 267.8 for the non-respondent group.

No significant differences were found when comparing the three groups in terms of the number of training participants and the average base wage of training participants. Table 4.2 shows that the average number of training participants among respondents is slightly lower than among non-respondents, by 1.7 percent, and lower than among not-in-sample frame companies, by 7.7 percent. Table 4.2 also shows the average base wage of training participants; the average base hourly wage in the respondent group is slightly higher than in the other two groups.

Industry type                      Respondent      Non-respondent     Not-in-sample
                                   company         company            frame company

Manufacturing 33 (73.3%) 65 (81.3%) 62 (80.5%)

Transportation & public utilities 2 (4.4%) 5 (6.3%) 1 (1.3%)

Wholesale trade - 3 (3.8%) 1 (1.3%)

Retail trade 3 (6.7%) 3 (3.8%) 4 (5.2%)

Finance, insurance & real estate 1 (2.2%) - -

Services - 1 (1.3%) 2 (2.6%)

Public administration - 1 (1.3%) -

Missing 6 (13.3%) 2 (2.5%) 7 (9.1%)

Total 45 (100%) 80 (100%) 77 (100%)

Table 4.1: Frequency and Percentage of Type of Industry among Respondent Companies,
Non-respondent Companies, and Not-in-Sample Frame Companies (OITP database)

n Mean SD

Company Size

Respondent company 45 264.9 354.7

Non-Respondent company 80 267.8 428.4

Not-in-Sample Frame company 77 341.5 815.8

Number of Participants in Training

Respondent company 45 236.4 342.7

Non-Respondent company 80 240.5 458.0

Not-in-Sample Frame company 77 254.6 398.9

Base Wage of Training Participants

Respondent company 44 $15.80 $5.29

Non-Respondent company 80 $14.44 $4.55

Not-in-Sample Frame company 76 $15.09 $4.75

Table 4.2: Company size, Number of Participants in Training, and Base Wage of
Training Participants among Respondent Companies, Non-respondent Companies, and
Not-in-Sample Frame Companies (OITP database)

Respondents

For this part, the researcher analyzed the 45 completed cases from both the manager

and senior manager surveys. This part presents the demographic information of the 45

respondents. The first two sets represent demographic information from two respondents

of each company. The other two sets represent demographic information from

participating companies and their training programs.

The demographic information collected for both respondents includes: number of

years with the company, current title, years in current title, and sex. The frequencies and

percentages are presented in Tables 4.3 and 4.4. Results show that the average length of service for the manager survey respondents is 10.91 years (SD=9.47), and the average length of service for the senior manager survey respondents is 12.08 years (SD=9.15).

The manager survey respondents are 57.8 percent male and 42.2 percent female, and the senior manager survey respondents are predominantly male (71.1%).

Current job titles for the manager survey respondents were reported as follows:

executives (31.1%), HR managers (26.7%), and training coordinators or managers

(17.8%). Only one respondent reported his/her title as HRD manager. Those who

reported “Other” included job titles such as controller, paralegal, production/inventory

assistant, and quality coordinator.

Current job titles for the second group of respondents were reported as follows:

HR VP or manager (20%), Operational VP (17.8%), and Managing director (13.3%).

Only one respondent reported his/her title as HRD VP or manager. Those who reported

“Other” included job titles such as accounting manager, controller, director of

transportation, director finance & information services, IT manager, material coordinator,

QA inspector, and research scientist.

Table 4.5 presents demographic information of participating companies.

Company size (number of employees) varies from 5 employees to over 1,700 employees.

The average company size is 389 employees (SD=399). Over 40 percent of companies have between 100 and 299 employees. The number of years doing business in Ohio ranges

from 4 years to 120 years. The average length of time doing business in Ohio is 43.3

years (SD=33.7).

Most participant companies (73.3%) are classified as "Manufacturing" according to SIC code. The remaining companies represent the service and trade industries (Table 4.5). Seven participant companies (15.6%) are either publicly owned or subsidiaries of publicly owned companies, and the majority (84.4%) are either privately owned or subsidiaries of privately owned companies. Participant companies are located throughout Ohio. Five companies (11.1%) are located in Cuyahoga county, and four companies (8.9%) are located in Summit county. The remaining companies are spread across 21 other counties, with one to three companies in each. Figure 4.1 shows the distribution of survey participant companies across the counties of Ohio.

Table 4.6 describes training program characteristics. Forty-four percent of the training programs (20 programs) provide technical training, and 22 percent provide managerial training. Awareness training makes up 20 percent of the programs. The length of training programs ranges widely, with 33 percent being longer than 8 weeks and 27 percent being shorter than 2 weeks; the distribution of training program length is bimodal. The mean training duration is 13.48 weeks (SD=18.40).

The number of participants in training ranges from 3 to 1,210 (M=99, SD=210). Fewer than 30 people participate in half of the training programs (53.3%). All levels of employees (frontline, supervisor, manager, and executive) participate in 5 training programs (11.1%). Only 12 training programs (26.7%) have a single level of participant; the remaining programs have more than one level of employee participating.

n %
Number of Years at the Company
0 – 5 years 15 33.3
6 – 10 years 11 24.4
11 – 15 years 5 11.1
16 – 20 years 1 2.2
More than 20 years 10 22.2
Missing 3 6.7
Job Position
Training Coordinator or Manager 8 17.8
HRD Manager 1 2.2
HR Manager 12 26.7
Supervisor 1 2.2
Executive 14 31.1
Other Manager 1 2.2
Other 5 11.1
Missing 3 6.7
Number of Years at the Current Job
0 – 5 years 20 44.4
6 – 10 years 15 33.3
11 – 15 years 3 6.7
16 – 20 years 2 4.4
More than 20 years 2 4.4
Missing 3 6.7
Sex
Male 26 57.8
Female 19 42.2

Table 4.3: Demographic Information for Manager Survey Respondents (Manager survey,
n=45)

n %
Number of Years at the Company
0 – 5 years 13 28.9
6 – 10 years 10 22.2
11 – 15 years 6 13.3
16 – 20 years 8 17.8
More than 20 years 6 13.3
Missing 2 4.4
Job Position
Managing Director 6 13.3
Operational VP 8 17.8
HRD VP or Manager 1 2.2
HR VP or Manager 9 20.0
Quality Manager 1 2.2
Safety Manager 1 2.2
Plant Manager 4 8.9
President/Partner/Owner 3 6.7
Production/Operational Manager 1 2.2
Other 9 20.0
Missing 2 4.4
Number of Years at the Current Job
0 – 5 years 23 51.1
6 – 10 years 13 28.9
11 – 15 years 3 6.7
16 – 20 years 2 4.4
More than 20 years 2 4.4
Missing 2 4.4
Sex
Male 32 71.1
Female 13 28.9

Table 4.4: Demographic Information for Senior Manager Survey Respondents (Senior
manager survey, n=45)

n %
The Size (Number of Employees)
1–9 3 6.7
10 – 49 8 17.8
50 – 99 6 13.3
100 – 299 19 42.2
300 – 999 6 13.3
Over 1,000 3 6.7
Number of Years Doing Business in Ohio
0 – 10 years 9 20.0
11 – 20 years 7 15.6
21 – 30 years 1 2.2
31 – 40 years 5 11.1
41 – 50 years 2 4.4
Over 50 years 17 37.8
Missing 4 8.9
Industry Type*
Manufacturing 33 73.3
Transportation & public utilities 3 6.7
Wholesale trade 2 4.4
Retail trade 3 6.7
Finance, insurance & real estate 1 2.2
Services 1 2.2
Missing 4 8.9
Ownership
Privately owned 38 84.4
Publicly owned 7 15.6
* Two companies have two industry types so the total sample is 47.

Table 4.5: Demographic Information of Participant Companies (OITP database and


Manager survey, n=45)

n %
Types of Training
Technical 20 44.4
Managerial 10 22.2
Awareness 9 20.0
Missing 6 13.3
Duration
0 – 2 weeks 12 26.7
3 – 4 weeks 3 6.7
5 – 8 weeks 5 11.1
More than 8 weeks 15 33.3
Missing 10 22.2
Number of Participants
1 – 9 employees 7 15.6
10 – 19 employees 8 17.8
20 – 29 employees 4 8.9
30 – 39 employees 6 13.3
40 – 49 employees 4 8.9
50 – 99 employees 8 17.8
100 – 999 employees 7 15.6
Over 1,000 employees 1 2.2
Trainees’ Employment Level in Company*
Frontline employee 40 88.9
Supervisor 30 66.7
Manager 24 53.3
Executive 10 22.2
Number of Employment Levels Participating in Training
1 Level 12 26.7
2 Levels 12 26.7
3 Levels 16 35.6
4 Levels 5 11.1
* Companies could select more than one category for their trainees' employment level.

Table 4.6: Demographic Information of the Training Programs (Manager survey, n=45)

Figure 4.1. Number of Survey Participant Companies in each County in Ohio (OITP
database, n=45)

Results for Each Research Question

Research Question One: Does training program effectiveness differ based on types of

training providers? Does the training provided by community colleges and/or four-year

universities result in a higher degree of training program effectiveness in comparison to

other external and/or internal training providers?

Training programs were developed and delivered by various entities. Table 4.7 presents the frequencies and percentages of types of training providers by program development and program delivery. The majority of training programs were developed (66.7%) and delivered (55.6%) by private companies; however, some were developed and delivered in cooperation with in-house training staff within client companies. Compared to private companies, educational institution involvement in training development and delivery is relatively small (11.1%). Private training providers include private training vendors and equipment manufacturers. Educational institutions include two-year colleges, four-year universities, and vocational/training centers.

Types of Training Provider n %

Training Program Development

In-house staff 26 57.8

Educational institution 5 11.1

Private company 30 66.7

Training Program Delivery

In-house staff 24 53.3

Educational institution 5 11.1

Private company 25 55.6

Table 4.7: Frequencies and Percentages of Type of Training Providers (Manager survey,
n=45)

One-way analysis of variance was applied to examine group differences in the degree of training effectiveness across types of training providers. Table 4.8 presents group differences in the degree of perceived effectiveness of the specific training program. Table 4.8 shows the value of F (2, 41) for training developer is .037 and is not significant (p>.10), which implies that there is no significant difference among training developers. The value of F (2, 41) for training deliverer is .105 and is not significant (p>.10), which implies that there is no significant difference among training deliverers.
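The F statistics in Tables 4.8 through 4.10 follow the usual one-way ANOVA decomposition of variance into between-group and within-group components. A minimal sketch of that computation, using hypothetical effectiveness ratings rather than the study data:

```python
# Minimal sketch of a one-way ANOVA F test, as applied above to compare
# perceived effectiveness across types of training providers.
# The group scores below are hypothetical, not the study data.

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of score lists."""
    k = len(groups)                          # number of provider types
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: squared deviations of group means,
    # weighted by group size
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group (error) sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical effectiveness ratings for three provider types
ratings = [[4.2, 4.8, 5.0], [4.5, 4.6, 4.9], [4.4, 5.1, 4.7]]
f, df1, df2 = one_way_anova(ratings)
```

An F near zero with large p, as in Table 4.8, indicates that the group means differ by no more than would be expected from within-group variation alone.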

Source Sum of Squares df Mean Square F Sig.
Training developer .031 2 .016 .037 .964
Error 17.230 41 .420
Total 17.261 43
Training deliverer .088 2 .044 .105 .901
Error 17.173 41 .419
Total 17.261 43

Table 4.8: One-way ANOVA on Type of Training Provider-Perception on Effectiveness


of this Specific Program (Manager survey and Senior manager survey, n=43)

Table 4.9 presents group differences in the degree of perceived relative effectiveness of the training program. Table 4.9 shows the value of F (2, 41) for training developer is .205 and is not significant (p>.10), which implies that there is no significant difference among training developers. The value of F (2, 41) for training deliverer is .360 and is not significant (p>.10), which implies that there is no significant difference among training deliverers.

Source Sum of Squares df Mean Square F Sig.
Training developer .071 2 .036 .205 .816

Error 7.153 41 .174

Total 7.225 43

Training deliverer .125 2 .062 .360 .700

Error 7.100 41 .173

Total 7.225 43

Table 4.9: One-way ANOVA on Type of Training Provider-Perception of Relative Effectiveness (Manager survey and Senior manager survey, n=43)

Table 4.10 presents group differences in terms of increase in operational margin. As can be seen in Table 4.10, the value of F (2, 11) for training developer is 1.941 and is not significant (p>.10), which implies that there is no significant difference among training developers. The value of F (2, 11) for training deliverer is 2.391 and is not significant (p>.10), which implies that there is no significant difference among training deliverers.

Source Sum of Squares df Mean Square F Sig.
Training developer 11.153 2 5.577 1.941 .190

Error 31.610 11 2.874

Total 42.763 13

Training deliverer 12.958 2 6.479 2.391 .137

Error 29.805 11 2.710

Total 42.763 13

Table 4.10: One-way ANOVA on Type of Training Provider-Increase in Operational Margin (Manager survey and Senior manager survey, n=13)

Independent sample t-tests were performed to examine differences in training program effectiveness between programs with in-house training staff involvement and programs without in-house training staff involvement. The results of the t-tests in Table 4.11 show there is no significant difference between the two groups. The t value for perception of this specific training program's effectiveness is .239 and is not significant (p>.10). The t value for perception of relative effectiveness of the training program is .889 and is not significant (p>.10). The t value for increase in operational margin is .475 and is not significant (p>.10).
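These t statistics come from the standard independent-samples formula. A minimal sketch assuming equal variances (the pooled form), with hypothetical rating lists rather than the survey data; note that the fractional df of 25.065 in Table 4.11 reflects Welch's unequal-variance correction, which this sketch omits:

```python
import math

def pooled_t(a, b):
    """Independent-samples t statistic with pooled variance
    (equal variances assumed), plus its degrees of freedom."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    # Pooled variance combines the two sample variances weighted by df
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (mean_a - mean_b) / se, na + nb - 2

# Hypothetical effectiveness ratings with and without in-house staff
with_staff = [4.8, 5.0, 4.6, 4.9]
without_staff = [4.7, 4.8, 4.9, 4.6]
t, df = pooled_t(with_staff, without_staff)
```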

Training Effectiveness t df Sig. Mean Difference SE
Perception of this specific training program (n=43) .239 42 .812 .047 .196
Perception of relative effectiveness (n=43) .889 25.065 .382 .123 .138
Increase in operational margin (n=13) .475 12 .643 .480 1.010

Table 4.11: Independent Samples t-test on In-house Training Staff and Training Program
Effectiveness (Manager survey and Senior manager survey)

Table 4.12 shows there is no significant difference regarding educational training

provider involvement. The t value in terms of perception of this specific training

program effectiveness is -.245 and is not significant (p>.10). The t value in terms of

perception of relative effectiveness of training program is .497 and is not significant

(p>.10). The t value in terms of increase in operational margin is -1.160 and is not

significant (p>.10).

Training Effectiveness t df Sig. Mean Difference SE
Perception of this training program (n=43) -.245 42 .808 -.075 .304
Perception of relative effectiveness (n=43) .497 42 .622 .098 .196
Increase in operational margin (n=13) -1.160 12 .269 -2.155 1.858

Table 4.12: Independent Samples t-test on Educational Institution Providers and Training
Program Effectiveness (Manager survey and Senior manager survey)

Table 4.13 presents the results of independent samples t-test regarding private

training provider involvement. The t value in terms of perception on this specific training

program effectiveness is .052 and is not significant (p>.10). The t value in terms of

perception on relative effectiveness of training program is .197 and is not significant

(p>.10). However, the t value in terms of increase in operational margin is 2.888 and is

significant (p<.05). This implies that there is a statistical difference in the increase in

operational margin between the two groups.

Thus, the results show that client organization’s perceived training effectiveness

does not differ based on the type of training providers and that training provided by

community colleges and/or four-year universities does not result in a different degree of

training program effectiveness in comparison to other external and/or internal training

providers. However, the results show that there is a statistical difference in the increase in operational margin between the training programs where private training providers were involved and the programs that did not involve private training providers.

Training Effectiveness t df Sig. Mean Difference SE
Perception of this training program (n=43) .052 42 .959 .011 .204
Perception of relative effectiveness (n=43) .197 42 .845 .026 .132
Increase in operational margin (n=13) 2.888* 11.976 .014 1.929 .668
* p<.05

Table 4.13: Independent Samples t-test on Private Training Providers and Training
Program Effectiveness (Manager survey and Senior manager survey)

Research Question Two: Does the degree of training program effectiveness differ based on the nature of the relationship between client organizations and external training providers?

Standard multiple regression analysis was performed on both manager and senior

manager surveys to examine the relationship between training program effectiveness and

the nature of the relationship with external training providers. The model regressed the

degree of the training program effectiveness on a vector of five independent variables:

external training provider’s level of involvement in training process, contract history with

this particular training provider, whether the providers initiated a follow-up contact, the

external training provider’s knowledge and experience in client organization’s business,

and degree of formality of the contract.

The squared multiple coefficient of correlation is presented to describe the

goodness of fit of the model by indicating the percentage of the variance in perceived

training effectiveness explained by the linear combination of five independent variables.

Because the formality of the contract is measured on a four-level ordinal scale, its levels were coded as separate dummy variables.
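The standard multiple regression described here amounts to solving the least-squares normal equations for the coefficient vector. A minimal sketch with hypothetical data, where the single predictor column stands in for one of the five variables above; the contract-formality dummies would simply be appended as additional 0/1 columns of the design matrix:

```python
# Minimal least-squares sketch: solve (X'X) b = X'y with Gaussian
# elimination. The design matrix and outcomes are hypothetical.

def ols(X, y):
    """Return least-squares coefficients for rows X and outcomes y."""
    n, p = len(X), len(X[0])
    # Form the normal equations: A = X'X, v = X'y
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(p)]
         for r in range(p)]
    v = [sum(X[i][r] * y[i] for i in range(n)) for r in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, p):
            factor = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= factor * A[col][c]
            v[r] -= factor * v[col]
    # Back-substitution
    b = [0.0] * p
    for r in range(p - 1, -1, -1):
        b[r] = (v[r] - sum(A[r][c] * b[c] for c in range(r + 1, p))) / A[r][r]
    return b

# Column 0 is the intercept; column 1 is a hypothetical predictor
# such as the provider's level of involvement
X = [[1, 0], [1, 1], [1, 2], [1, 3]]
y = [1.0, 3.0, 5.0, 7.0]        # generated from y = 1 + 2x
b = ols(X, y)
```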

Tables 4.14, 4.16, and 4.18 present the results of the correlation statistics among the variables. Tables 4.15, 4.17, and 4.19 present the results of the

standard multiple regression analysis. For perception of this specific training program

effectiveness, none of the standardized coefficients (beta) of independent variables shows

a significant relationship with this perception at an alpha level of .05 (Table 4.15). The

squared multiple coefficient of correlation (Adjusted R square) is 0.

For perception of relative training program effectiveness, none of the

standardized coefficients of independent variables shows a significant relationship with

this perception at an alpha level of .05 (Table 4.17). The squared multiple coefficient of

correlation is 0.

In terms of increase in operational margin, four standardized coefficients show a

significant relationship (Table 4.19). The external training provider’s level of

involvement in the training program process is shown to have a significant relationship

with increase in operational margin (Beta = .376, p<.05). This implies that the operational margin of programs in which the external training provider is involved in additional stages of the training process increases more than that of programs in which the provider is not involved in additional stages. The external training provider's knowledge and experience in the client organization's business also has a significant relationship with increase in operational margin (Beta = .655, p<.05). This implies that programs having external

training providers with more knowledge and experience in the client organization’s

business have greater increases in operational margin when compared with programs

where the external training provider has less knowledge and experience.

There is a negative relationship between increase in operational margin and whether the contract was written every time (Beta = -.685, p<.01). This implies that the operational margin of programs increases less when clients put every contract in writing than when they do not. There is also a negative relationship between increase in operational margin and whether the contract was first written and later verbal (Beta = -.398, p<.05). The squared multiple coefficient of correlation is .889, which implies that the group of independent variables explains 88.9 percent of the total variance of increase in operational margin.

Thus, the results show that the degree of training program effectiveness differs based on the nature of the relationship between client organizations and external training providers. The increase in operational margin differs based on the external training provider's level of involvement in the training program process, the provider's knowledge and experience in the client organization's business, and the formality of the contract.

1 2 3 4 5 6 7 8 Mean SD

1. Provider’s level of 1.000 .531** .834** .410* -.476** .004 -.164 -.301* 1.98 1.602
involvement
2. Contract history with providers 1.000 .639** -.056 -.227 .194 -.193 -.105 2.11 2.622

3. Provider’s knowledge & 1.000 .252 -.728** .136 .050 -.314* 2.53 1.727
experience
4. Provider’s follow-up 1.000 -.415* -.135 .224 -.102 .84 .374

5. Contract Ia 1.000 -.112 -.093 .012 .03 .183

6. Contract IIb 1.000 -.302 .169 .27 .450



7. Contract IIIc 1.000 -.106 .20 .407

8. Perception of the specific 1.000 4.85 .634


training program effectiveness
a
0 = Any form of contract; 1 = Neither written nor verbal contract
b
0 = Not the case of first written and later verbal contract; 1 = First written and later verbal contract
c
0 = Not a written contract in every time; 1 = Written contract in every time

Table 4.14: Summary Data: Regression of Perception of the Specific Training Program Effectiveness on Selected Variables in
the Relationship with External Training Providers (Manager survey and Senior manager survey)
Variables R2 R2 change b Beta
Provider’s Level of Involvement .017 .017 -.113 -.187
Contract History with Providers .036 .019 .080 .315
Provider’s Knowledge & Experience .107 .071 -.664 -.676
Provider’s Follow-up .108 .000 -.035 -.020
Contract .221 .113
Contract I – neither written nor verbal -1.839 -.513
Contract II– first written, later verbal .157 .107
Contract III– written contract in every time -.100 -.062

(Constant) 7.253

Standard error = .678

Table 4.15: Standard Multiple Regression of Perception of the Specific Training Program Effectiveness on Selected Variables in
the Relationship with External Training Providers (n = 28) (Manager survey and Senior manager survey)
1 2 3 4 5 6 7 8 Mean SD

1. Provider’s level of 1.000 .531** .834** .410* -.476** .004 -.164 .032 1.98 1.602
involvement
2. Contract history with providers 1.000 .639** -.056 -.227 .194 -.193 -.105 2.11 2.622

3. Provider’s knowledge & 1.000 .252 -.728** .136 .050 -.029 2.53 1.727
experience
4. Provider’s follow-up 1.000 -.415* -.135 .224 .059 .84 .374

5. Contract Ia 1.000 -.112 -.093 -.099 .03 .183

6. Contract IIb 1.000 -.302 .330 .27 .450



7. Contract IIIc 1.000 -.232 .20 .407

8. Perception of the relative 1.000 -.06 .410

training program effectiveness
a
0 = Any form of contract; 1 = Neither written nor verbal contract
b
0 = Not the case of first written and later verbal contract; 1 = First written and later verbal contract
c
0 = Not a written contract in every time; 1 = Written contract in every time

Table 4.16: Summary Data: Regression of Perception of the Relative Training Program Effectiveness on Selected Variables in
the Relationship with External Training Providers (Manager survey and Senior manager survey)
Variables R2 R2 change b Beta
Provider’s Level of Involvement .016 .016 .013 .032
Contract History with Providers .026 .010 -.021 -.122
Provider’s Knowledge & Experience .032 .005 -.163 -.249
Provider’s Follow-up .032 .000 .093 .081
Contract .202 .171
Contract I – neither written nor verbal -.591 -.247
Contract II– first written, later verbal .296 .303
Contract III– written contract in every time -.205 -.191

(Constant) .441

Standard error = .457

Table 4.17: Standard Multiple Regression of Perception of the Relative Training Program Effectiveness on Selected Variables in
the Relationship with External Training Providers (n = 28) (Manager survey and Senior manager survey)
1 2 3 4 5 6 7 8 Mean SD

1. Provider’s level of 1.000 .531** .834** .410* -.476** .004 -.164 -.018 1.98 1.602
involvement
2. Contract history with providers 1.000 .639** -.056 -.227 .194 -.193 -.255 2.11 2.622

3. Provider’s knowledge & 1.000 .252 -.728** .136 .050 -.329 2.53 1.727
experience
4. Provider's follow-up 1.000 -.415* -.135 .224 d .84 .374

5. Contract Ia 1.000 -.112 -.093 d .03 .183

6. Contract IIb 1.000 -.302 -.257 .27 .450



7. Contract IIIc 1.000 -.637* .20 .407

8. Increase in Operational Margin 1.000 -.655 1.813


a
0 = Any form of contract; 1 = Neither written nor verbal contract
b
0 = Not the case of first written and later verbal contract; 1 = First written and later verbal contract
c
0 = Not a written contract in every time; 1 = Written contract in every time
d
cannot be computed because at least one of the variables is constant.

Table 4.18: Summary Data: Regression of Increase in Operational Margin on Selected Variables in the Relationship with
External Training Providers (Manager survey and Senior manager survey)
Variables R2 R2 change b Beta
Provider’s Level of Involvement .459 .459 .696 .376*
Contract History with Providers .513 .054 -.176 -.201
Provider’s Knowledge & Experience .622 .108 4.114 .655*
Contract .951 .329
Contract II– first written, later verbal -1.877 -.398*
Contract III– written contract in every time -3.226 -.685**
(Constant) -17.664

Standard error = .661

Table 4.19: Standard Multiple Regression of Increase in Operational Margin on Selected Variables in the Relationship with
External Training Providers (n = 10) (Manager survey and Senior manager survey)
Research Question Three: Does the degree of training program effectiveness differ based on the quality of the provider's training needs assessment?

Independent sample t-tests were performed to examine differences in training program effectiveness between programs that implemented a training needs assessment and those that did not. The results of the t-tests in Table 4.20 show there is no significant difference between the two groups. The t value for perception of this specific training program's effectiveness is -.769 and is not significant (p>.10). The t value for perception of relative effectiveness of the training program is -.751 and is not significant (p>.10). The t value for increase in operational margin is 1.203 and is not significant (p>.10). This implies that there is no statistical difference in training program effectiveness between programs with and without a training needs assessment.

Training Effectiveness t df Sig. Mean Difference SE
Perception of this training program (n=43) -.769 42 .446 -.152 .197
Perception of relative effectiveness (n=43) -.751 42 .457 -.096 .128
Increase in operational margin (n=13) 1.203 12 .252 1.023 .850

Table 4.20: Independent Samples t-test on Training Needs Assessment and Training
Program Effectiveness (Manager survey and Senior manager survey)

Correlational statistics were used to determine the relationship between

perceived training effectiveness and the quality of the training needs assessment. The

value of a correlation coefficient represents the extent to which two variables are related

to each other, with values ranging from a perfect positive relationship (1.00) through no relationship (0.00) to a perfect negative relationship (-1.00).
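The coefficient described above is the Pearson product-moment correlation, which can be sketched directly from its definition; the paired scores below are hypothetical, not the study data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Numerator: sum of cross-products of deviations from the means
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    # Denominator: product of the root sums of squared deviations
    sx = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sy = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical needs-assessment quality and effectiveness scores
# lying on a straight line, so r should be exactly 1.0
quality = [1.0, 2.0, 3.0, 4.0]
effect = [3.0, 5.0, 7.0, 9.0]     # = 2 * quality + 1
r = pearson_r(quality, effect)
```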

Table 4.21 presents the results of the correlation statistics. The correlation coefficient between the perception of the specific training program effectiveness and the quality of training needs assessment is .175 and is not statistically significant (p>.10). The correlation coefficient between the perception of the relative training effectiveness and the quality of training needs assessment is -.036 and is not statistically significant (p>.10). The correlation coefficient between the increase in operational margin and the quality of training needs assessment is -.287 and is not significant (p>.10). These results imply that there are no significant relationships between training program effectiveness and the quality of training needs assessment.

Thus, the results show that training program effectiveness does not differ based

on the quality of training needs assessment.

1 2 3 4

1. Perception of the specific training 1.000 .429* -.283 .175


program effectiveness
2. Perception of relative effectiveness 1.000 -.074 -.036

3. Increase in operational margin 1.000 -.287

4. Quality of Training Needs Assessment 1.000

*p<.01

Table 4.21: Correlation Matrix between the Quality of Training Needs Assessment and
Training Program Effectiveness (Manager survey and Senior manager survey)

Research Question Four: How do training programs vary in terms of the extent of customization, type of training, relationship to job, proportion of participants versus total employees, and expected outcome? How do these differences in the nature of the training program impact training program effectiveness?

First, descriptive statistics are presented in Tables 4.6, 4.22, and 4.23 to show how the training programs vary on the five sub-variables. As Table 4.6 shows, the type of training varies from technical training (44.4%) to managerial training (22.2%) and awareness training (20%). The proportion of training participants among total employees also varies. Table 4.23 shows that 11 training programs (24.3%) have less than 10 percent of participants among total employees, and that all employees participated in 13 training programs (28.9%). For the remaining 21 training programs, participation varies between 10 and 99 percent. The average proportion of participants among total employees is 50 percent (SD=40.0%).

Table 4.22 shows that most training programs are job-specific (n=32, 71.1%) rather than job-related (n=13, 28.9%). For almost half of the training programs (44.4%), the intended goal is to increase knowledge and skills; another 42.2 percent reported increasing organizational performance as the intended goal. Other specified training goals are an ISO certificate and "to develop and sustain a positive culture."

In terms of customization, only one training program did not customize any part of the program; the other 44 training programs customized at least one item of the training program with client company materials or to meet participants' needs. The average number of customized items per training program is 5.67 (SD=2.67) out of 10 items. Table 4.23 presents the distribution of the number of customized items. Among customized items, instructional materials are customized most frequently (n=39, 86.7%), and training objectives or trainees' prerequisites are customized next most frequently (n=36, 80%). Table 4.22 shows that 35 training programs (77.8%) were entirely developed for the client, and 9 (20%) were customized.

N %

Intended Goal of Training

Increase Knowledge & Skills 20 44.4

Change Behavior 4 8.9

Increase Organizational Performance 19 42.2

Other 2 4.4

Relationship to Job

Job-specific 32 71.1

Job-related 13 28.9

Extent of Customization

No customization 1 2.2

Customized 9 20.0

Entirely developed 35 77.8

Table 4.22: Frequencies and Percentages on Intended Goal of Training, Level of Relationship to Job, and Level of Customization (Manager survey, n=45)

N %
Participants’ Portion among Total Employees
0 – 9.99 % 11 24.3
10 – 19.99 % 4 8.9
20 – 29.99 % 2 4.4
30 – 39.99 % 5 11.1
40 – 49.99 % 1 2.2
50 – 59.99 % 5 11.1
60 – 69.99 % 0 0.0
70 – 79.99 % 1 2.2
80 – 89.99 % 2 4.4
90 – 100.00 % 14 31.1
Number of Items to be Customized
0 – 3 items 8 17.8
4 – 6 items 18 40.0
7 – 10 items 19 42.2
Customized Training Item
Objectives or Trainees’ Prerequisites 36 80.0
Instructional Materials 39 86.7
Timetable 23 51.1
Delivery method 24 53.3
Evaluation or Test Forms 25 55.6

Table 4.23: Distribution on Training Participants’ Portion among Total Employees and
Level of Customization (Manager survey, n=45)

Next, standard multiple regression analysis was employed to examine the relationship between training program effectiveness and the nature of the training program using the manager and senior manager surveys. The model regressed the degree of training program effectiveness on a vector of five sub-variables: type of training, proportion of training participants versus total employees, training relationship to job, extent of customization, and expected outcome. Type of training and expected outcome were dummy coded since they are measured on ordinal scales. Relationship to job was coded as a dichotomous variable since a training program is either job-specific or job-related.
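The dummy coding step described here (one 0/1 indicator per non-reference level of a categorical variable) can be sketched as follows; the level names are illustrative, and the first level serves as the reference category:

```python
def dummy_code(values, levels):
    """Return one 0/1 indicator column per level after the first;
    the first level is the omitted reference category."""
    return [[1 if v == level else 0 for level in levels[1:]] for v in values]

# Hypothetical responses for type of training
levels = ["technical", "managerial", "awareness"]
responses = ["technical", "managerial", "awareness", "technical"]
coded = dummy_code(responses, levels)
# Each row has two indicators; a technical program codes as [0, 0]
```

Dropping the reference category avoids perfect collinearity with the intercept, so the remaining columns can enter the regression design matrix directly.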

Tables 4.24, 4.26, and 4.28 present the results of the correlation statistics among the variables. Tables 4.25, 4.27, and 4.29 present the results of the standard multiple regression analysis. For perception of this specific training program effectiveness, six standardized coefficients of independent variables show a statistically significant relationship with this perception at the alpha level of .05 (Table 4.25). First,

the beta of proportion of trainees among employees is -.344 (p<.05), which means that if the

number of trainees among employees increases, perception of this specific training

program effectiveness can be predicted to decrease. Second, the beta of intended goal to

increase learning is -1.230 (p<.05), which means that if the intended goal is to increase

learning, perception of this specific training program effectiveness can be predicted to

decrease. The beta of intended goal to change behavior is -1.178 (p<.01), which also

means that if the intended goal is to change behavior, perception of this specific training

118
program effectiveness can be predicted to decrease. The beta of intended goal to increase

organizational performance is -1.244 and is significant (p<.05).

The beta of entire development of the program is .469 (p<.01), which implies that if the training program was entirely developed, perception of this specific training program's effectiveness is predicted to increase. However, the beta of extent of customization is -.393 (p<.05), which means that as the number of customized training parts increases, perception of this specific training program's effectiveness is predicted to decrease. The squared multiple correlation coefficient is .376, which means that the group of independent variables explains 37.6 percent of the total variance of the perception of this specific training program's effectiveness.

In terms of perception of relative training program effectiveness, three standardized coefficients show a significant relationship (Table 4.27). The beta of the proportion of trainees among employees again shows a negative relationship with perception of relative training program effectiveness (Beta = -.414, p<.05). The beta of relationship to job is .369 (p<.10), which implies that if the training contents are job-specific, perception of relative training program effectiveness is higher by .369 than if the training contents are job-related.

The beta of entire development of the program is also positive and highly significant, at .486 (p<.01), which means that if the training program was entirely developed, perception of relative training program effectiveness is predicted to increase. The squared multiple correlation coefficient is .264, which means that the group of independent variables explains 26.4 percent of the total variance of perception of relative training program effectiveness.

In terms of the increase in operational margin, two standardized coefficients show a negative relationship (Table 4.29). The beta of the entirely developed program is -3.504 (p<.10), which means that the operational margin of organizations whose program was entirely developed increases less than that of organizations whose program was not entirely developed. The beta of the technical type of training is -2.237 (p<.10). The squared multiple correlation coefficient is .367.

Thus, the results show that training program effectiveness differs based on the

nature of the training program, particularly in terms of proportion of training participants

versus total employees, training relationship to job, and extent of customization.
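The R2 and "R2 change" columns in the regression tables report how much the explained variance grows as each predictor (or dummy block) enters the model. A minimal sketch of that computation, again with invented data and hypothetical variable names:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (an intercept column is prepended)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
n = 37
x1 = rng.normal(size=n)                    # e.g. proportion of trainees
x2 = rng.integers(0, 2, n).astype(float)   # e.g. job-specific dummy
y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)

r2_step1 = r_squared(x1[:, None], y)                  # first predictor only
r2_step2 = r_squared(np.column_stack([x1, x2]), y)    # both predictors
r2_change = r2_step2 - r2_step1  # increment in variance explained at step 2
```

Because the step-1 model is nested in the step-2 model, the R2 change is never negative, which matches the monotonically increasing R2 columns in the tables that follow.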

1 2 3 4 5 6 7 8 9 10 Mean SD
1. Proportion of trainees among employees 1.000 -.242 -.165 .044 .078 .076 .066 -.366* .083 -.317* .50 .396
2. Job specific (a) 1.000 .274 -.145 -.150 -.105 -.006 .573** -.372* .291 .71 .458
3. Intended goal I (b) 1.000 -.279 -.765* .048 .181 .243 .019 .072 .44 .503
4. Intended goal II (c) 1.000 -.267 .167 -.109 -.009 .189 -.346* .09 .288
5. Intended goal III (d) 1.000 -.084 -.148 -.127 -.190 .045 .42 .499
6. Entirely developed (e) 1.000 .318* .133 .076 .092 .78 .420
7. Level of customization 1.000 .127 -.124 -.199 5.67 2.663
8. Type of training I (f) 1.000 -.602** .133 .51 .506
9. Type of training II (g) 1.000 -.142 .26 .442
10. Perception of the specific training program effectiveness 1.000 4.85 .634

(a) 0 = Job-related training program; 1 = Job-specific training program
(b) 0 = Intended goal is not to increase learning; 1 = Intended goal is to increase learning
(c) 0 = Intended goal is not to change behavior; 1 = Intended goal is to change behavior
(d) 0 = Intended goal is not to increase organizational performance; 1 = Intended goal is to increase organizational performance
(e) 0 = A training program was not entirely developed; 1 = A training program was entirely developed
(f) 0 = Type of training is not managerial; 1 = Type of training is managerial
(g) 0 = Type of training is not technical; 1 = Type of training is technical

Table 4.24: Summary Data: Regression of Perception of the Specific Training Program Effectiveness on Selected Variables in Training Program Characteristics (Manager survey and Senior manager survey)
Variables R2 R2 change b Beta
Proportion of trainees among employees .124 .124 -.592 -.344*
Job specific .162 .038 .209 .143
Intended goal .332 .170
  Intended goal I -1.665 -1.230*
  Intended goal II -2.541 -1.178**
  Intended goal III -1.649 -1.244*
Entirely developed .408 .076 .704 .469**
Level of customization .525 .117 -.095 -.393*
Type of training .527 .003
  Type of training I -.084 -.063
  Type of training II -.117 -.075
(Constant) 6.782

Standard error = .530

*p<.05
**p<.01

Table 4.25: Standard Multiple Regression of Perception of the Specific Training Program Effectiveness on Selected Variables in Training Program Characteristics (n = 37) (Manager survey and Senior manager survey)
1 2 3 4 5 6 7 8 9 10 Mean SD
1. Proportion of trainees among employees 1.000 -.242 -.165 .044 .078 .076 .066 -.366* .083 -.411** .50 .396
2. Job specific (a) 1.000 .274 -.145 -.150 -.105 -.006 .573** -.372* .300* .71 .458
3. Intended goal I (b) 1.000 -.279 -.765* .048 .181 .243 .019 -.014 .44 .503
4. Intended goal II (c) 1.000 -.267 .167 -.109 -.009 .189 .060 .09 .288
5. Intended goal III (d) 1.000 -.084 -.148 -.127 -.190 -.028 .42 .499
6. Entirely developed (e) 1.000 .318* .133 .076 .293 .78 .420
7. Level of customization 1.000 .127 -.124 -.093 5.67 2.663
8. Type of training I (f) 1.000 -.602** .272 .51 .506
9. Type of training II (g) 1.000 -.193 .26 .442
10. Perception of the relative training program effectiveness 1.000 -.06 .410

(a) 0 = Job-related training program; 1 = Job-specific training program
(b) 0 = Intended goal is not to increase learning; 1 = Intended goal is to increase learning
(c) 0 = Intended goal is not to change behavior; 1 = Intended goal is to change behavior
(d) 0 = Intended goal is not to increase organizational performance; 1 = Intended goal is to increase organizational performance
(e) 0 = A training program was not entirely developed; 1 = A training program was entirely developed
(f) 0 = Type of training is not managerial; 1 = Type of training is managerial
(g) 0 = Type of training is not technical; 1 = Type of training is technical

Table 4.26: Summary Data: Regression of Perception of the Relative Training Program Effectiveness on Selected Variables in Training Program Characteristics (Manager survey and Senior manager survey)
Variables R2 R2 change b Beta
Proportion of trainees among employees .159 .159 -.453 -.414**
Job specific .230 .071 .342 .369*
Intended goal .259 .029
  Intended goal I -.222 -.258
  Intended goal II -.087 -.063
  Intended goal III -.227 -.270
Entirely developed .372 .113 .465 .486***
Level of customization .402 .030 -.033 -.213
Type of training .443 .042
  Type of training I -.233 -.276
  Type of training II -.288 -.291
(Constant) .136

Standard error = .366

*p<.10
**p<.05
***p<.01

Table 4.27: Standard Multiple Regression of Perception of the Relative Training Program Effectiveness on Selected Variables in Training Program Characteristics (n = 37) (Manager survey and Senior manager survey)
1 2 3 4 5 6 7 8 9 10 Mean SD
1. Proportion of trainees among employees 1.000 -.242 -.165 .044 .078 .076 .066 -.366* .083 -.282 .50 .396
2. Job specific (a) 1.000 .274 -.145 -.150 -.105 -.006 .573** -.372* .194 .71 .458
3. Intended goal I (b) 1.000 -.279 -.765* .048 .181 .243 .019 .238 .44 .503
4. Intended goal II (c) 1.000 -.267 .167 -.109 -.009 .189 -.227 .09 .288
5. Intended goal III (d) 1.000 -.084 -.148 -.127 -.190 -.112 .42 .499
6. Entirely developed (e) 1.000 .318* .133 .076 -.324 .78 .420
7. Level of customization 1.000 .127 -.124 -.075 5.67 2.663
8. Type of training I (f) 1.000 -.602** -.068 .51 .506
9. Type of training II (g) 1.000 -.194 .26 .442
10. Increase in operational margin 1.000 -.655 1.814

(a) 0 = Job-related training program; 1 = Job-specific training program
(b) 0 = Intended goal is not to increase learning; 1 = Intended goal is to increase learning
(c) 0 = Intended goal is not to change behavior; 1 = Intended goal is to change behavior
(d) 0 = Intended goal is not to increase organizational performance; 1 = Intended goal is to increase organizational performance
(e) 0 = A training program was not entirely developed; 1 = A training program was entirely developed
(f) 0 = Type of training is not managerial; 1 = Type of training is managerial
(g) 0 = Type of training is not technical; 1 = Type of training is technical

Table 4.28: Summary Data: Regression of Increase in Operational Margin on Selected Variables in Training Program Characteristics (Manager survey and Senior manager survey)
Variables R2 R2 change b Beta
Proportion of trainees among employees .024 .024 -6.307 -1.376
Job specific .048 .024 -13.283 -3.591
Intended goal .120 .072
  Intended goal I -9.841 -2.380
  Intended goal II -.932 -.145
Entirely developed .255 .135 -14.493 -3.504*
Level of customization .333 .078 1.384 2.072
Type of training .873 .540
  Type of training I 9.213 2.491
  Type of training II -9.252 -2.237*
(Constant) 15.138

Standard error = .366

*p<.10

Table 4.29: Standard Multiple Regression of Increase in Operational Margin on Selected Variables in Training Program Characteristics (n = 10) (Manager survey and Senior manager survey)
Research Question Five: Are perceived training program effectiveness and client organization financial performance (operational margin) related? Is perceived training effectiveness a good indicator of financial performance, or vice versa?

Correlational statistics were used to determine the relationship between perceived training program effectiveness and the increase in operational margin for the client organization. Table 4.30 presents the results of the correlation statistics. The correlation coefficient between perception of the specific training program's effectiveness and perception of relative training effectiveness is .429, and is statistically significant (p<.05). This means that perception of this specific training program is associated with perception of relative training program effectiveness.

However, the correlation coefficient between perception of this specific training program's effectiveness and the increase in operational margin is -.283 and is not statistically significant. The correlation coefficient between perception of relative training effectiveness and the increase in operational margin is -.074 and is also not significant. These results imply that there is no significant relationship between perceived training program effectiveness and the increase in operational margin.

Thus, the results show that there is no relationship between perceived training program effectiveness and client organization financial performance.
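The pairwise correlation analysis used for this research question can be sketched with scipy's Pearson correlation, which returns both the coefficient and its two-tailed p-value. The three measures and all data below are invented stand-ins for illustration only.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 43  # number of organizations that completed both surveys

# Invented stand-ins for the three measures in Table 4.30
specific = rng.normal(4.8, 0.6, n)                 # perception of this program
relative = 0.4 * specific + rng.normal(0, 0.5, n)  # perception of relative effectiveness
margin = rng.normal(0, 2, n)                       # increase in operational margin

# Pairwise Pearson correlations with two-tailed p-values
r_sr, p_sr = pearsonr(specific, relative)  # constructed to be related
r_sm, p_sm = pearsonr(specific, margin)    # generated independently, so expected near zero
```

A coefficient is judged significant by comparing its p-value with the chosen alpha level; in the study, only the correlation between the two perception measures cleared that bar.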

1 2 3
1. Perception of this training program 1.000 .429** -.283
2. Perception of relative effectiveness 1.000 -.074
3. Increase in operational margin 1.000

**p<.01

Table 4.30: Correlation Matrix between Perceptions of Training Program Effectiveness and Increase in Operational Margin (Manager survey and Senior manager survey, n=43)

CHAPTER 5

SUMMARY, DISCUSSION, AND IMPLICATIONS

This chapter is composed of three sections. The first section summarizes the results of the data analysis and describes the findings for the five research questions. The second section discusses the findings further. The last section proposes a revised conceptual framework and provides implications for workforce development policy makers and HRD researchers as well as practitioners.

Summary of Findings

The purpose of this study was to describe the relationship between training program characteristics and training effectiveness among organizations receiving services from external training providers. The study was limited to 202 organizations that received training funds from the OITP and implemented training programs from January 2002 to June 2004. Because 77 organizations were reclassified as not in the sample, the total number of organizations in the sample frame was 125. The number of organizations that completed both the manager and senior manager surveys was 45, and thus the response rate was 36 percent. The results showed that the three groups (responses, non-responses, and not-in-the-sample-frame) do not differ in terms of organization size, type of industry, geographic location, number of training participants, and average base hourly wage of trainees.

Most organizations are either privately owned companies or subsidiaries of privately owned companies. Most are manufacturing companies, but they range in size from fewer than 10 to more than 10,000 employees.

Research Question One: Does training program effectiveness differ based on types of

training providers? Does the training provided by community colleges and/or four-year

universities result in a higher degree of training program effectiveness in comparison to

other external and/or internal training providers?

The study identified various entities involved in the training process among client organizations receiving funds from the OITP. Less than half of the business organizations developed and delivered training programs without an external training provider's assistance; more than half received training-related services from external training entities. The majority of external training providers are private training vendors and private equipment manufacturers. Only a few cases were found in which educational institutions were involved in the training process.

The results of the study showed that training program effectiveness does not differ based on the type of training provider. Client organizations do not perceive that who developed the training program, or who delivered it, affected training program effectiveness. Who developed and who delivered training programs were also found not to impact the increase in operational margin.

It was found that the training provided by community colleges or four-year universities does not result in a higher degree of perceived training program effectiveness in comparison to other external and internal training providers. This shows that client organizations' perceived training program effectiveness does not vary based on different training providers.

However, the study found that client organizations that received training services from private training providers experienced a greater increase in operational margin than those that did not. It cannot be concluded that private training providers' involvement drives the increase in operational margin, but it can be concluded that the operational margin of companies that received training services from private training providers increased more than that of companies that did not.

Research Question Two: Does the degree of training program effectiveness differ based on the nature of the relationship between client organizations and external training providers?

The results showed that the degree of training program effectiveness relates to whether client organizations agreed on training programs and formally contracted with external training providers. Although the external training provider's level of involvement in the training program was not shown to relate to perceived training effectiveness, the more external training providers were involved in the training process, the more the operational margin increased. In addition, the results showed that the more business knowledge external training providers have about client organizations, the more the client organization's operational margin increases.

Research Question Three: Does the degree of training program effectiveness differ based

on the quality of provider’s training needs assessment?

The results of the study showed that training needs assessment has no relationship with training program effectiveness. Training program effectiveness was shown not to differ based on whether a training needs assessment was implemented. In addition, the quality of the training needs assessment was found not to relate to training program effectiveness.

Research Question Four: How do training programs vary in terms of the extent of customization, type of training, relationship to job, proportion of participants versus total employees, and expected outcome? How do these differences in the nature of the training program impact training program effectiveness?

The results showed that training programs vary in terms of the extent of customization, type of training, relationship to job, proportion of participants versus total employees, and expected outcome. Almost half of the training is technical, almost one quarter is managerial, and the remaining quarter is awareness training. The proportion of training participants versus total employees also varies, from less than 10 percent to 100 percent. Most training programs are job-specific. Almost half of the training programs' expected outcomes are related to employee development, and the other half are related to organizational performance.

In terms of customization, all but one training program customized at least one part of the training program. The majority of training programs were entirely developed for the specific training.

The results showed that the smaller the proportion of employees participating in a training program, the more effective senior managers perceived the program to be. They also showed that when the training program is specific to jobs, senior managers perceived it as more effective than when the training is simply related to jobs. When the training program was entirely developed, senior managers perceived it as more effective than when the program was not newly developed. A training program that was entirely developed also predicts a greater increase in operational margin than a program that was not newly developed.

Research Question Five: Are perceived training program effectiveness and client organization's financial performance (operational margin) related? Is perceived training effectiveness a good indicator of financial performance, or vice versa?

The results of the study showed that perceived training program effectiveness does not relate to client organization's financial performance in terms of an increase in operational margin. Perceived training effectiveness is not an indicator of financial performance, and the increase in operational margin likewise cannot serve as an indicator of perceived training effectiveness.

Discussion

As described in the previous section, some of the study's results differed from the proposed conceptual framework, while others partially supported it. This section discusses some possible interpretations of the results and presents an explanation of the findings.

Perceived Training Effectiveness and Organizational Financial Performance

The results showed that senior managers in client organizations perceive training effectiveness as unrelated to an increase in operational margin. This result can be interpreted in a couple of ways. First, senior managers in client organizations may not be aware of their organizations' financial performance, or they may not evaluate training effectiveness from an organizational financial performance perspective. In other words, senior managers may perceive training effectiveness independently of organizational financial performance. In assessing training effectiveness, their perception may be influenced by their beliefs about the training, as denoted by Kirkpatrick's first three levels, rather than by financial performance.

Second, measurements other than the increase in operational margin may more appropriately reflect training's impact on organizational performance. For example, business survival itself can be an evaluation criterion for organizational performance: since most organizations in the study sample are manufacturers, and since many manufacturing companies fail these days, an organization's continued existence might itself signal success. Another example is the change in the number of employees. When organizations expect their business to grow, they might hire more employees to meet market demand. This might reduce the annual operational margin, even though the organizations experienced a short-term increase in operational margin due to increased productivity.

Another possible explanation is that senior managers may feel less financial accountability for OITP-funded training, since this training is publicly funded.

External Training Providers and Training Effectiveness

This study investigated two different types of external training providers: educational institutions and private training providers. It particularly sought advantages of partnership training with educational institutions, because such partnerships might be encouraged by the synergy of public funds. The OITP fund is a public fund, so the organizations spent public money on their training. At the same time, many community colleges, four-year universities, and vocational training centers also run on public support. Accordingly, research on state-funded, employer-based training (GAO, 2004) asserted that state governments should encourage partnership training with state-supported educational institutions to maximize the utility of public funds. However, this study found that only a couple of organizations engaged educational institutions for their training. It is therefore difficult to test training effectiveness for educational institutions as external training providers, although previous studies asserted the advantages of partnership training with educational institutions.

In contrast to the findings on partnership training with educational institutions, partnership training with private training providers produced different results. First, the majority of organizations hire private training vendors or equipment manufacturers as their external training providers. Furthermore, partnership training with private training providers was identified as bringing a greater increase in operational margin to client organizations, though the number of sample cases is small. Because it is very hard to conclude that the increase in operational margin is a reliable measurement of organizational performance in this sample, as explained in the previous section, it cannot be concluded that partnership training with private training providers impacts organizational performance more directly than non-partnership training or partnership training with educational institutions. Rather, this result can be interpreted to mean that private training providers, including equipment manufacturers, may be more accessible, more affordable, or more effective for client organizations in implementing training. Another interpretation is that private training providers may focus more on the client's organizational issues; although community colleges or universities may also focus on the client's organizational issues, they are guided more by an education-focused mission.

Relationship with External Training Providers

In addition, this study identified ways to evaluate the partnership training relationship, such as the training provider's level of involvement in the training process and the training provider's knowledge of the client organization's business. Based on previous studies, it is not surprising that the training provider's level of involvement and knowledge were positively correlated with training effectiveness.

However, in contrast to previous studies (Kenis and Knoke, 2002), formal contracting was negatively correlated with training effectiveness. This result may be explained by the small number of cases in the sample that engaged external training providers.

Training Needs Assessment and Training Effectiveness

The results of this study showed that training needs assessment did not relate to the training effectiveness of the companies that received training funds from the OITP, contrary to the results of previous research on needs assessment. Since the study represents one setting, it is difficult to generalize the results to other contexts. There are two possible interpretations of this finding. First, whether a training needs assessment is implemented might not impact training effectiveness; however, the quality of the training needs assessment might, and the study instrument might have failed to measure this quality extensively enough to identify differences in training effectiveness.

Another possible interpretation is that the managers who made decisions about the training programs might already have been aware of the company's training needs. In that case, the training needs assessment did not influence decision making about training, and thus did not impact training effectiveness. Because training needs are then assessed and recognized informally, a formal training needs assessment may be no more effective than an informal one. In other words, training needs assessments as currently implemented might not be performed well; they might assess training needs only perfunctorily.

Nature of the Training Program

Although the number of items customized was found not to relate to training effectiveness, the results of the study showed that training programs that were entirely developed showed a large increase in operational margin at client organizations. Because most training programs customized at least some instructional materials, such as company data, format, materials, and equipment, the extent of customization may not be related to training effectiveness.

In terms of training duration, the distribution was bimodal, with an average of 13.48 weeks. Since the results of the study do not provide any information regarding the intensity of the training program (how many hours per week), training duration could be tested against training effectiveness only in a limited way. Another limitation is that the absolute number of cases is too small for certain variables to apply statistical analysis. This study also cannot eliminate the impact of training programs other than the OITP-funded one.

Implications

The first part of this section presents implications for future research, including a revised conceptual framework. The second part provides several implications for practice and policy in the HRD and workforce development areas.

Implications for Future Research

This study examined training effectiveness at the organizational performance level using senior managers' perception of training effectiveness and financial measures. It focused on the increase in operational margin, which reflects operational efficiency within client organizations. Since both measurements (perceived training effectiveness and the increase in operational margin) were designed to reflect one specific training program, they should have been correlated. Because they showed no relationship, it is critical to investigate whether senior managers or HRD/training managers are interested in the relationship between training effectiveness and organizational performance. It is critical to see how well HRD/training managers understand the potential impact of training on organizational performance improvement, in particular financial performance. Without that recognition, organizations cannot expect increases in organizational performance through training. Thus, building on these results, research on training should emphasize identifying current managers' knowledge of and attitudes toward training evaluation relating to organizational performance.

This study also found a need to develop varied measurements to assess training effectiveness from an organizational performance perspective. Although it is important to identify measurements that assess training effectiveness across industries, it might be more important to identify and apply appropriate measurements based on the type of industry and/or type of training. In this respect, the study of training evaluation should be extended toward assessing performance at the organizational level. This study provides a basis for developing such measurements.

This study also identified needs for further study of partnership training. First, the advantages and disadvantages of partnership training with educational institutions as training providers should be investigated; for such a study, it might be better to select cases where partnership training with educational institutions was actually used. Second, the practice of partnership training with private training providers should be studied. This study found that business organizations frequently utilize private training providers; further study is needed on why private training providers are preferred, and whether these reasons relate to organizational-level performance.

Last, it is essential to identify more variables that affect external training providers and, ultimately, training effectiveness in increasing organizational performance. This study identified the external training provider's level of involvement in the training process and the provider's knowledge of the client organization's business. However, the study failed to clearly identify how the contractual relationship between client organizations and external training providers impacts training effectiveness. If the contractual relationship does impact training effectiveness, it is critical to identify other contractual components, and it might be more appropriate to employ qualitative analysis to investigate the relationship. In addition, this study attempted to examine the quantity of interaction with external training providers. A case study is recommended to examine the quality of interaction in addition to its quantity, to more clearly understand the impact of the provider-client relationship on training effectiveness.

In terms of training needs assessment, this study showed that whether a training needs assessment was implemented does not relate to training effectiveness. Thus, rather than identifying whether a training needs assessment was implemented, it might be more meaningful to study its quality. Qualitative research methods can be used to identify the factors that determine quality. Since the quality of current training needs assessments is suspected to be poor, it is important to study the quality of needs assessment. In addition, studies of training needs assessment should investigate the process by which assessment results are reflected in training decisions, and whether those results actually change or influence training decisions in practice.

The revised conceptual framework will guide future research. Based on the

findings from the data analysis, discussion, and literature review, the proposed conceptual

framework is revised. Figure 5.1 presents the revised conceptual framework.

Types of Training Provider
• Internal staff
• Educational institution
• Private training vendor

Nature of the Provider-Client Relationship
• Level of involvement
• Knowledge of the business
• Formality

Provider's Training Needs Assessment
• Formality
• Quality

Nature of the Training Program
• Entire development with customization
• Relationship to job

Training Program Effectiveness
• Perception
• Financial performance

[In the figure, each of the four factors above points to Training Program Effectiveness.]

Figure 5.1: Revised Conceptual Framework


Implications for Practice and Policy

Lesson for Business Organization

One contribution of this study to business is to call attention to the relationship between training and business performance. The results implied that managers in charge of training might not be aware of the impact of training on the organization's financial performance, or might not know how to evaluate training effectiveness at the organizational level, and that this can result in ineffective training. It is critical to develop, deliver, and evaluate training programs based on expected organizational-level performance, which is also the fundamental purpose of training. By doing so, organizations can establish the rationale for training not as a cost but as an investment, and can assess training effectiveness against organizational expectations as well as returns on investment.
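As an illustration of treating training as an investment rather than a cost, the return-on-investment measure from the evaluation literature cited in this study (Phillips, 1996) can be expressed as:

```latex
% Training ROI in the Phillips (1996) style:
% net program benefits as a percentage of program costs
\mathrm{ROI}\,(\%) = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100
```

For example, a program costing $50,000 that produces $80,000 in measured monetary benefits yields an ROI of (80,000 − 50,000) / 50,000 × 100 = 60 percent. This formula is offered as an illustration from the cited literature, not as the evaluation method used by the organizations in this study.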

This study also draws the attention of business to training needs assessment. The results showed that it is quite possible for training needs assessment to be poorly performed in practice. Organizations should therefore recognize the possibility that the results of a training needs assessment are of poor quality. They should conduct training needs assessments carefully, or control the quality of the assessment when it is performed by an external training provider.

Lesson for Workforce Development Policy and Practice

This study is one of the few to investigate state-funded, employer-based training, and the first attempt to examine the impact of OITP funds on businesses in Ohio. It presents implications not only for assessing the effectiveness of state-funded, employer-based training, but also for helping businesses adopt more effective training practices. The results offer several considerations for the state government in assisting businesses within the state.

The state government might need to examine the current OITP process carefully in order to use the funds more effectively. The study showed that OITP funds were granted mainly to privately owned manufacturing companies with fewer than 300 employees. Is this the group of companies that the state government originally targeted? The state can first develop economic development strategies and then review the characteristics of OITP fund recipients to determine whether its original intention was met. Based on the results of this study, the state government can target a particular group of businesses.

In addition, the study showed a very weak linkage between state-supported educational institutions and state-funded training programs. These results should prompt policy makers to develop policies that create synergy among local economic needs, workforce capabilities, and the functions of higher education. The OITP could thus be administered in cooperation with higher education policies.

This study also showed that OITP recipient organizations do not evaluate training effectiveness with financial measures. Based on OITP documents, the program does not evaluate its impact from a financial performance perspective, although it does collect data on the projected increase in the number of employees after training. It may be concluded that the impact of the OITP and its evaluation outcomes are not monitored. The OITP needs to establish evaluation criteria for monitoring its impact, to ensure that training funds are focused on improving organizational performance.
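Because this study measured financial performance through operating margin, one illustrative criterion the OITP could monitor is the change in a recipient company's operating margin before and after funded training. The conventional accounting definition is:

```latex
% Operating margin: operating income as a share of net sales
\text{operating margin} = \frac{\text{operating income}}{\text{net sales}}
```

For example, a company with $2,000,000 in net sales and $240,000 in operating income has an operating margin of 12 percent. This is the standard definition found in corporate finance texts such as Damodaran (1999), offered here as one possible monitoring criterion rather than an established OITP measure.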

Lesson for Higher Education Practice

The results of this study draw the attention of workforce development practitioners in higher education to their practice of partnership training. Community colleges and four-year universities may have developed their own markets for training and vocational education services. However, the results suggest that there is another potential market that higher education can enter. By strengthening partnership training with private business organizations, higher education institutions can obtain several benefits, such as developing up-to-date curricula that meet current business needs and making continuous use of existing training staff, facilities, and curricula.

REFERENCES

Acemoglu, D. & Pischke, J. (1999). Beyond Becker: Training in imperfect labour


markets. The Economic Journal, 109, 112-142.

Ahlstrand, A. L., Bassi, L. J., & McMurrer, D. P. (2003). Workplace Education for Low-
Wage Workers. Kalamazoo, MI: W. E. Upjohn Institute for Employment
Research.

Allen, J. P. (2002). A community college partnership with an electrical contractor and


union. New Directions for Community Colleges(119), 31-36.

Alliger, G. M., Tannenbaum, S. I., Bennett, W., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-358.

Altschuld, J. W. (2003). Evaluation methods: Principles of Need Assessment.


Unpublished manuscript, The Ohio State University at Columbus.

Anderson, E., Fornell, C., & Lehmann, D. (1994). Customer satisfaction, market share,
and profitability: Findings from Sweden. Journal of Marketing, July, 53-66.

Ary, D., Jacobs, L. C., & Razavieh, A. (2002). Introduction to Research in Education (6th ed.). Belmont, CA: Wadsworth/Thomson Learning.

Aslanian, C. B. (1988). Partnerships for training: Putting principles into action. In D. R.


Powers, M. F. Powers, F. Betz & C. B. Aslanian (Eds.), Higher Education in
Partnership with Industry: Opportunities and Strategies for Training, Research,
and Economic Development (pp. 243-278). San Francisco: Jossey-Bass.

Bailey, J. A. (1995). Forming professional/education partnership. Management


Accounting, 76(11), 24-29.

Banker, R. D., Potter, G., & Srinivasan, D. (2000). An empirical investigation of an


incentive plan that includes nonfinancial performance measures. The Accounting
Review, 75(1), 65-92.

Barron, J. M., Berger, M. C., & Black, D. A. (1999). Do workers pay for on-the-job
training? Journal of Human Resources, 34, 235-252.

Becker, G. (1964). Human Capital. Chicago: The University of Chicago Press.

Borjas, G. (2000). Human Capital. Labor Economics, 226-261.

Bowie, N. E. (1994). University-Business Partnership: An Assessment. Lanham, MD: Rowman & Littlefield Publishers, Inc.

Bragg, D. D. (2001). Opportunities and challenges for the new vocationalism in


American community colleges. In D. D. Bragg (Ed.), The New Vocationalism in
Community Colleges (pp. 5-16). San Francisco: Jossey-Bass.

Bragg, D. D. & Jacobs, J. (1993). Establishing an operational definition for customized


training. Community College Review, 21(1), 15-26.

Brinkerhoff, R. O., & Gill, S. J. (1994). The learning alliance: Systems thinking in human resource development. San Francisco: Jossey-Bass.

Brown, J. (2002). Training needs assessment: A must for developing an effective training
program. Public Personnel Management, 31(4), 569-580.

Brown, P. (1999). Client-based management qualifications: A case of win-win? Journal


of Management Development, 18(4), 350-361.

Campbell, T. I. & Slaughter, S. (1999). Faculty and administrators' attitudes toward


potential conflicts of interest, commitment, and equity in university-industry
relationships. The Journal of Higher Education, 70(3), 309-352.

Cappelli, P., Bassi, L. J., Katz, H., Knoke, D., Osterman, P., & Useem, M. (1997).
Change at Work. New York: Oxford University Press.

Carnevale, A. P. (1998). Education and Training for America's Future. Washington D.C.:
The Manufacturing Institute.

Chan, T. S. (1994). Developing international managers: A partnership approach. The


Journal of Management Development, 13(3), 38-51.

Damodaran, A. (1999). Applied corporate finance. New York: John Wiley & Sons

Dare, C. (1999). Spotlight. Journal of European Industrial Training, 23(9), 446.

Demb, A. (2003). The administration of academic affairs in higher education. Columbus,


OH: The Ohio State University.

Dillman, D. A. (2000). Mail and internet surveys: The tailored design method (Second
ed.). New York: John Wiley & Sons.

Eastmond, N. (1994). Assessing needs, developing instruction, and evaluating results. In


B. Willis (Ed.), Distance Education: Strategies and Tools (pp. 87-107).
Englewood Cliffs, NJ: Educational Technology.

Eaton, S. C. (2003). If you can use them: Flexibility policies, organizational commitment,
and perceived performance. Industrial Relations, 42(2), 145-167.

Eccles, R. (1991). The performance measurement manifesto. Harvard Business Review,


January-February, 131-137.

Ellis, N. & Moon, S. (1998). Business and HE links: The research for meaningful
relationships in the placement marketplace-part two. Education & Training, 40(9),
390-397.

GAO. (2004). Workforce training: Almost half of states fund employment placement and training through employer taxes and most coordinate with federally funded programs (No. GAO-04-282). Washington, D.C.: United States General Accounting Office.

Gold, J., Whitehouse, N., & Hill, M. (1998). "If the CAPS fit...": Learning to manage in
SMEs. Education & Training, 40(6/7), 321-327.

Goldberg, M. & Ramos, L. (2003). Trainer's task: Do more with less. Pharmaceutical
Executive, 110-118.

Goldstein, I. L. (1993). Training in Organizations: Needs Assessment, Development, and


Evaluation (Third ed.). Pacific Grove, CA: Brooks and Cole.

Golfin, P. A., White, J. D., & Curtin, L. A. (1998). A role for community colleges in Navy
training (No. CRM-97-97). Alexandria, VA.: Center for Naval Analysis.

Gorman, P., McDonald, B., Moore, R., Glassman, A., Takeuchi, L., & Henry, M. (2003). Custom Needs Assessment for Strategic HR Training: The Los Angeles County Experience. Public Personnel Management, 32(4), 475-496.

Gray, G. R., Hall, M. E., Miller, M., & Shasky, C. (1997). Training practices in state
government agencies. Public Personnel Management, 26(2), 187-202.

Gray, K. C. & Herr, E. L. (1998). Workforce Education: The Basics. Boston, MA: Allyn
and Bacon.

Grubb, W. N. & Stern, D. (1989). Separating the wheat from the chaff: The role of
vocational education in economic development. Berkeley, CA: National Center
for Research in Vocational Education.

Hagen, R. (2002). Globalization, university transformation and economic regeneration: A


UK case study of public/private sector partnership. The International Journal of
Public Sector Management, 15(3), 204-218.

Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate data
analysis (5th ed.). Upper Saddle River, NJ: Prentice Hall.

Hall, Z. W. & Scott, C. (2001). University-industry partnership. Science(291), 553-555.

Ham, J. C. (1994). Looking into the black box: Using experimental data to find out how
training works. International Journal of Manpower, 15(5), 32-37.

Hanna, J. B. (2001). The academic-practitioner interface revisited: A demonstrative extension to transportation and marketing. American Business Review, 19(1), 109-117.

Hardingham, A. (1996). Improve an inside job with an outside edge. People Management, 2(12), 45-47.

Hawley, J. D., Sommers, D., & Melendez, E. (2003a). The Earnings Impact of Adult
Workforce Education in Ohio. Paper presented at the Networking and Best
Practices in Workforce Development, The Ford Foundation.

Hawley, J. D., Sommers, D., & Melendez, E. (2005). The impact of institutional
collaborations on the achievement of workforce development performance
measures in Ohio. Adult Education Quarterly (forthcoming).

Hodson, R., Hooks, G., & Rieble, S. (1992). Customized training in the workplace. Work
and Occupations, 19(3), 272-292.

Ittner, C. & Larcker, D. (1998). Are nonfinancial measures leading indicators of financial
performance? An analysis of customer satisfaction. Journal of Accounting
Research, 36, 1-35.

Jacobs, R. L. (1999). Partnership research: Ensuring more useful HRD collaboration.


Paper presented at the Academy of Human Resource Development Annual
Conference, Baton Rouge, LA.

Jacobs, R. L. (2001). Managing employee competence in global organizations. In J. Kidd


(Ed.), Maximising Human Intelligence Development in Asia. London: Palgrave
Press.

Jacobs, R. L. (2003). Structured on-the-job training (2nd ed.). San Francisco: Berrett-
Koehler.

Jacobs, R. L. & Hawley, J. D. (2003, March). Workforce development: Definition and


relationship with human resource development. Paper presented at the Annual
Conference of the Academy of Human Resource Development. Minneapolis, MN.

Jacobs, R. L. & Washington, C. (2003). Employee development and organizational


performance: A review of literature and directions for future research. Human
Resource Development International, 6(3), 343-354.

Jamrog, J. (2004). The perfect storm: The future of retention and engagement. Human
Resource Planning, 27(3), 26-34.

Johnston, G. H. (2001). Work-Based Learning: Finding a new niche. In D. D. Bragg (Ed.),


The New Vocationalism in Community Colleges (pp. 73-80). San Francisco:
Jossey-Bass.

Johnstone, D. B. (1994). College at work: Partnerships and the rebuilding of America.


The Journal of Higher Education, 65(2), 168-185.

Kalleberg, A. L., & Van Buren, M. E. (1996). Organizational Differences in Earnings. In


A. L. Kalleberg, D. Knoke, P. Marsden, V. & J. L. Spaeth (Eds.), Organizations
in America: Analyzing their structures and human resource practices (pp. 200-
213). Thousand Oaks, CA: Sage Publications.

Kaufman, R. & Keller, J. M. (1994). Levels of evaluation: Beyond Kirkpatrick. Human


Resource Development Quarterly, 5(4), 371-380.

Kaufman, R., Rojas, A. M., & Mayer, H. (1993). Needs Assessment: A User's Guide.
Englewood Cliffs, NJ: Educational Technologies Publications.

Keithley, D. & Redman, T. (1997). University-industry partnerships in management


development: A case study of a "world-class" company. The Journal of
Management Development, 16(3), 154-166.

Kenis, P. & Knoke, D. (2002). How organizational field networks shape


interorganizational tie-formation rates. Academy of Management Review, 27(2),
275-293.

Kirkpatrick, D. L. (1996). Great ideas revisited technique for evaluating training


programs: Revisiting Kirkpatrick's four-level model. Training & Development,
50(1), 54-59.

Knoke, D. (1997). Job training programs and practices. In P. Cappelli, L. Bassi, H. Katz,
D. Knoke, P. Osterman & M. Useem (Eds.), Change at Work (pp. 122-153). New
York: Oxford University Press.

Knoke, D., & Janowiec-Kurle, L. (1999). Make or buy? The externalization of company
job training. Research in the Sociology of Organizations, 16, 85-106.

Kraiger, K., Ford, J. K., & Salas, E. (1993). Application of cognitive, skill-based, and
affective theories of learning outcomes to new methods of training evaluation.
Journal of Applied Psychology, 78, 311-328.

Leach, M. P. & Liu, A. H. (2003). Investigating interrelationships among sales training evaluation methods. Journal of Personal Selling & Sales Management, 23(4), 327-339.

Lincoln, J. R. & Kalleberg, A. L. (1990). Culture, control, and commitment: A study of work organization and attitudes in the United States and Japan. New York: Cambridge University Press.

Lumby, J. (1998). Restraining the further education market: Closing Pandora's box.
Education & Training, 44(2/3), 57-62.

Lupton, R. A., Weiss, J. E., & Peterson, R. T. (1999). Sales training evaluation model
(STEM): A conceptual framework. Industrial Marketing Management, 28(1), 73-
86.

Lynch, L. (1992). Private sector training and the earnings of young workers. American
Economic Review, 82, 299-312.

Lynch, R., Palmer, J., & Grubb, W. N. (1991). Community College Involvement in Contract Training and Other Economic Development. Berkeley, CA: National Center for Research in Vocational Education, University of California.

Mathieu, J. E. & Zajac, D. (1990). A review and meta-analysis of the antecedents,


correlates, and consequences of organizational commitment. Psychological
Bulletin, 108(2), 171-194.

Mavin, S. & Bryans, P. (2000). Management development in the public sector: what roles
can universities play? The International Journal of Public Sector Management,
13(2/3), 142-152.

McClelland, S. B. (1994a). Training needs assessment data-gathering methods: Part1,


survey questionnaires. Journal of European Industrial Training, 18(1), 22-26.

McClelland, S. B. (1994b). Training needs assessment data-gathering methods: Part 2,


individual interviews. Journal of European Industrial Training, 18(2), 27-31.

McClelland, S. B. (1994c). Training needs assessment data-gathering methods: Part 3,


focus groups. Journal of European Industrial Training, 18(3), 29-32.

McClelland, S. B. (1994d). Training needs assessment data-gathering methods: Part 4,


on-site observations. Journal of European Industrial Training, 18(5), 4-7.

McMurtrie, B. (2001, May 25). Community colleges become a force in developing


nations worldwide. Chronicle of Higher Education, 47, A44-46.

Miller, L. C. & Hustedde, R. J. (1987). Group Approaches. In D. E. Johnson, L. R.


Meiller, L. C. Miller & G. F. Summers (Eds.), Needs Assessment: Theory and
Methods (pp. 91-125). Ames, Iowa: Iowa State University Press.

Moore, D. E., Christenson, J. A., & Ishler, A. S. (1987). Large-Scale Surveys. In D. E.
Johnson, L. R. Meiller, L. C. Miller & G. F. Summers (Eds.), Needs Assessment:
Theory and Methods (pp. 142-155). Ames, Iowa: Iowa State University Press.

Moore, R. W., Blake, D. R., Phillips, G. M., & McConaughy, D. (2003). Training that
works: Lessons from California's employment training panel program.
Kalamazoo, Michigan: Upjohn Institute for Employment Research.

Normile, D. (1996). Japan hopes to cash in on industry-university ties. Science, 274(5292), 1457-1458.

O'Rear, H. M. (2002). Performance-based training evaluation in a high-tech company. Unpublished Ph.D. dissertation, The University of Texas, Austin, Texas.

Osterman, P. (1995). Skill, training, and work organization in American establishments.


Industrial Relations, 34, 125-146.

Otala, L. (1994). Industry-university partnership: Implementing lifelong learning. Journal


of European Industrial Training, 18(8), 13-20.

Parrish, D. A. (1998). The new boom in client training. VARBusiness, 14(14), 91-94.

Pauley, D. (2001). Collaboration: The key to developing America's workforce.


Community College Journal of Research and Practice, 25, 5-15.

Pearce, J. A. I. (1999). Faculty survey on business education reform. The Academy of


Management Executive, 13(2), 105-109.

Phillips, J. J. (1990). Handbook of Training Evaluation and Measurement Methods. London: Kogan Page.

Phillips, J. J. (1996). ROI: The search for best practices. Training & Development, 50(2), 42-47.

Porter, M. E. (1996). What is strategy? Harvard Business Review, 74(6), 61-78.

Powers, D. R. & Powers, M. F. (1988a). Cooperative approaches to education and


research. In D. R. Powers, M. F. Powers, F. Betz & C. B. Aslanian (Eds.), Higher
Education in Partnership with Industry: Opportunities and Strategies for
Training, Research, and Economic Development (pp. 31-75). San Francisco:
Jossey-Bass.

Powers, D. R. & Powers, M. F. (1988b). Benefits of cooperation between higher


education and industry. In D. R. Powers, M. F. Powers, F. Betz & C. B. Aslanian
(Eds.), Higher Education in Partnership with Industry: Opportunities and
Strategies for Training, Research, and Economic Development (pp. 3-30). San
Francisco: Jossey-Bass.

Prince, C. (2002). Developments in the market for client-based management education.
Journal of European Industrial Training, 26(7), 353-359.

Rajan, A. & Harris, S. (2003, September 9). What works, what doesn't, and why. Personnel Today, 19-21.

Regional Technology Strategies, Inc. (1999). A comprehensive look at state-funded, employer-focused job training programs. National Governors' Association Center for Best Practices.

Rhodes, F. H. T. (2001). The creation of the future: The role of the American University.
Ithaca, NY: Cornell University Press.

Roessner, D., Ailes, C. P., Feller, I., & Parker, L. (1998). How industry benefits from
NSF's engineering research centers. Research Technology Management, 41(5),
40-44.

Roever, C. (2000). Mead Corporation's creative approach to internships: Success in a


unionized manufacturing plant. Business Communication Quarterly, 63(1), 90-
100.

Rossett, A. (1987). Training Needs Assessment. Englewood Cliffs, NJ: Educational


Technology Publications.

Rowley, D. J., Lujan, H. D., & Dolence, M. G. (1998). Strategic choices for the academy:
How demand for lifelong learning will re-create higher education. San Francisco:
Jossey-Bass.

Ryan, J. H. & Heim, A. A. (1997). Promoting economic development through university and industry partnerships. New Directions for Higher Education(97), 42-50.

Santoro, M. D. & Betts, S. C. (2002). Making industry-university partnership work.


Research Technology Management, 45(3), 42-62.

Sekowski, G. J. (2002). Evaluating training outcomes: Testing an expanded model of training outcome criteria. Unpublished Ph.D. dissertation, DePaul University.

Sleezer, C. M. (1993). Training needs assessment at work: A dynamic process. Human


Resource Development Quarterly, 4(3), 247-264.

Sole, S. (1999). Customized training. NZ Business, 13(1), 24-29.

Starbuck, E. (2001). Optimizing university research collaborations. Research Technology


Management, 44(1), 40-44.

Stewart, J. (1999). Employee Development Practice. London: Pitman.

Swanson, R. A. & Holton, E. F. (2001). Foundations of human resource development. San Francisco: Berrett-Koehler.

Thacker, R. A. (2002). Revising HR curriculum: An academic/practitioner partnership.


Education & Training, 44(1), 31-39.

Van Buren, M. E. & Erskine, W. (2002). The 2002 ASTD state of the industry report. Alexandria, VA: American Society for Training and Development.

Warr, P. B. & Bunce, D. (1995). Trainee characteristics and the outcomes of open
learning. Personnel Psychology, 48(2), 347-375.

Weinberg, S. L. (2002). Data analysis for the behavioral sciences using SPSS.
Cambridge, UK: Cambridge University Press.

Wells, S. J. (1999). Novices fill technology gaps. HRMagazine, 44(12), 74-78.

Witkin, B. R. & Altschuld, J. W. (1995). Planning and Conducting Needs Assessments: A


Practical Guide. Thousand Oaks, CA: Sage Publications.

Wright, P. C. & Geroy, G. D. (1992). Needs analysis theory and the effectiveness of
large-scale government sponsored training programs: A case study. Journal of
Management Development(1), 1-27.

Yasin, M. M., Czuchry, A. J., Martin, J., & Feagins, R. (2000). An open system approach
to higher learning: The role of joint ventures with business. Industrial
Management + Data Systems, 100(5), 227-233.

APPENDIX A
TRAINING PROGRAM AND RELATIONSHIP WITH TRAINING PROVIDER
SURVEY QUESTIONNAIRE

APPENDIX B
TRAINING EFFECTIVENESS SURVEY QUESTIONNAIRE

APPENDIX C
COVER EMAIL SCRIPTS FOR BOTH SURVEYS

Cover Email Scripts for Manager Survey
 
Dear Mr._________ 
 
We are pleased that you will respond to this survey regarding the training programs funded by the Ohio Investment in Training Programs (OITP). The survey is conducted by The Ohio State University and supported by the OITP, Ohio Department of Development. You will find the support letter from the OITP as an attachment.
 
It should take 15 minutes to respond to items in the questionnaire. Questions ask about 
the training program characteristics funded by OITP. All responses to this survey will be 
kept confidential. Only aggregate data will be reported in the study results. Your 
participation is voluntary and you can skip any questions that you do not wish to 
answer. 
 
Please complete the survey and submit it by March 28, 2005.   
 
If you have any questions, please contact Jeeyon Paek, The Ohio State University, via
email paek.7@osu.edu or phone 614‐783‐7890. You may also contact Dr. Joshua Hawley, 
The Ohio State University (email: Hawley.32@osu.edu, phone: 614‐247‐6226) 
 
Thank you again in advance for your time and effort.   
 
To begin the survey, click the following link:
      http://.paes.osu.edu/wde/~hrdmanagersurvey.html
 
Sincerely,
Jeeyon Paek

Cover Email Scripts for Senior Manager Survey
 
 
Dear Mr._________ 
 
We are pleased that you will respond to the survey regarding the training programs funded by the Ohio Investment in Training Programs (OITP). This survey is conducted by The Ohio State University and supported by the OITP, Ohio Department of Development.
 
It should take 10 minutes to respond to items in the questionnaire. Questions ask about 
the training effectiveness at your company. All responses to this survey will be kept 
confidential. Only aggregate data will be reported in the study results. Your 
participation is voluntary and you may skip any questions that you do not wish to 
answer. 
 
Please complete the survey by March 30, 2005.   
 
If you have any questions, please contact Jeeyon Paek, The Ohio State University, via 
email paek.7@osu.edu or phone 614‐273‐0642. You may also contact Dr. Joshua Hawley, 
The Ohio State University (email: Hawley.32@osu.edu, phone: 614‐247‐6226)   
 
Thank you again in advance for your time and assistance.   
 
To begin the survey, click the following link:
      http://.paes.osu.edu/wde/~hrdmanagersurvey.html
 
Sincerely, 
Jeeyon Paek 

APPENDIX D
SUPPORTING LETTERS FROM OITP FOR EACH SURVEY

