2002, 55
KEVIN D. CARLSON
Department of Management
Virginia Polytechnic Institute and State University
MARY L. CONNERLEY, ROSS L. MECHAM III
Department of Management
Virginia Polytechnic Institute and State University
462 PERSONNEL PSYCHOLOGY
Recruitment Evaluation
Decomposing Recruitment
position and any individual that achieves a score at or above the cut score
is assumed to be qualified for hire.
The importance of attraction outcomes. Although each of the three re-
cruitment processes is important, attraction activities occur first and are
the most crucial for determining recruitment and staffing success. At-
traction activities establish the pool of applicants from which new hires
will eventually be chosen. If top candidates do not apply, an organiza-
tion has no chance of hiring them. Thus, the maximum potential value of
a recruiting cycle (i.e., the contribution the best applicants could make
to organizational effectiveness) is fixed once the applicant pool is es-
tablished. To assess attraction outcomes, the relevant applicant pool in
continuous recruitment approaches is all applicants of record at the time
of decision making that have not already been hired, irrespective of their
current status. An alternative is to limit the definition to only those ap-
plicants that applied during some specified period of time (e.g., the pre-
vious year).
The maximum potential in an applicant pool is realized only if the
best applicants in the pool are offered, and then accept, positions. Any
other outcome (i.e., the failure to retain top candidates in the pool or
to gain their acceptance of offers of employment) represents a loss of
potential value. Status maintenance and gaining job acceptance activ-
ities influence the effectiveness of recruitment by reducing or avoiding
the loss of potential that can occur when the best applicants do not join
organizations. However, neither status maintenance nor efforts to in-
fluence job choice can raise the potential contribution of a recruitment
cycle beyond what is present in the applicant pool. The same is true
of selection activities. Selection practices minimize losses of potential to
the extent they ensure hiring mistakes are minimized. The use of devices
with less than perfect validity (r < 1.0), though, assures that some loss
will occur. But the best that even perfectly valid selection methods can
do is to identify the best applicants in the pool; they cannot make them
better. Thus, the first priority of recruitment should be attracting the best
possible applicants because attraction outcomes establish the maximum
contribution that is possible in any staffing system. Even heroic efforts
in status maintenance, selection, gaining job acceptance, or employee
retention cannot overcome poor attraction outcomes.
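The ceiling argument above can be illustrated with a small simulation. This is a generic latent-variable sketch, not a model taken from the article: predicted scores correlate r with true quality, and selection hires the highest predicted score. Perfect validity recovers the best applicant already in the pool; imperfect validity often misses that applicant; neither can exceed the pool's best.

```python
import math
import random

def simulate_hire(true_scores, validity, rng):
    """Pick the applicant with the highest predicted score, where the
    predictor correlates `validity` with true quality (a generic
    latent-variable sketch, not a model taken from this article)."""
    noise_sd = math.sqrt(1.0 - validity ** 2)
    predicted = [validity * t + rng.gauss(0.0, noise_sd) for t in true_scores]
    best = max(range(len(true_scores)), key=lambda i: predicted[i])
    return true_scores[best]

rng = random.Random(42)
pool = [rng.gauss(0.0, 1.0) for _ in range(50)]  # hypothetical quality scores

perfect = simulate_hire(pool, 1.0, rng)     # r = 1.0 recovers the pool's best
imperfect = simulate_hire(pool, 0.23, rng)  # r = .23 often misses it

assert perfect == max(pool)
assert imperfect <= max(pool)  # no device can exceed what attraction provided
```

Whatever the validity of the device, the hired applicant's quality is bounded by the best score the attraction phase placed in the pool.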
like assessments of the job performance of new hires, are not useful
diagnostic tools; changes in these assessments cannot be aligned with
recruitment phases. As a result, whether poor recruitment outcomes
are due to not attracting top candidates, ineffective methods for keep-
ing candidates interested until offers can be made, an inability to sign
top candidates, or some combination of these factors, cannot be deter-
mined. Knowing which outcomes are poor would allow an organization
to direct its improvement efforts to those areas with the greatest poten-
tial for improving overall effectiveness.
Attraction outcomes are assessed in some organizations. Currently,
the most common method for evaluating attraction outcomes is count-
ing the number of applicants generated. Fifty percent of organiza-
tions that reported performing formal assessments of recruitment in a
SHRM/CCH (1999) survey indicated they assessed number of applicants
generated for at least some positions. Number of applicants generated
is a simple and relatively inexpensive assessment that provides a use-
ful initial indicator of attraction effectiveness. It determines whether
sufficient numbers of applicants exist to fill open positions, a primary
recruitment goal. Failing to attract enough applicants to fill open posi-
tions results in recruitment failures. Recruitment failures are expensive,
potentially requiring duplication of recruitment costs and extending the
length of position vacancies.
The number of applicants attracted also provides an indirect assess-
ment of applicant quality. If more applicants are generated than there
are positions to be filled, organizations can be selective. They can offer
positions to individuals likely to be high performers, avoiding candidates
that are less likely to perform well on the job. The larger the applicant
pool, the more selective the organization can be, and increased selec-
tivity is assumed to result in better hires and subsequent increases in
organizational performance. However, in the case of applicants, more
is not always better. As the number of applicants increases so do the
costs of administering recruitment and selection systems. Larger appli-
cant pools have more applicants to track, correspond with, and screen,
raising costs and potentially extending the time required to fill vacant
positions. A tradeoff exists. Increasing the number of applicants is valu-
able as long as the gains from being more selective are large enough to
offset increased costs.
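The diminishing-returns side of this tradeoff can be sketched numerically. The Monte Carlo estimate below is illustrative and not from the article: quadrupling the size of a pool drawn from a standard normal population keeps raising the expected quality of the best applicant, but by progressively smaller increments.

```python
import random

def expected_best(pool_size, trials=5000):
    """Monte Carlo estimate of the expected top quality score in a pool of
    `pool_size` applicants drawn from a standard normal population."""
    rng = random.Random(7)
    total = 0.0
    for _ in range(trials):
        total += max(rng.gauss(0.0, 1.0) for _ in range(pool_size))
    return total / trials

gains = [expected_best(n) for n in (10, 40, 160)]
first_step = gains[1] - gains[0]    # gain from quadrupling 10 -> 40
second_step = gains[2] - gains[1]   # gain from quadrupling 40 -> 160

assert second_step > 0              # bigger pools still raise the expected best
assert first_step > second_step     # but by progressively smaller increments
```

Because administrative costs grow roughly linearly with pool size while these quality gains shrink, an optimum exists where marginal gain equals marginal cost.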
Brogden (1949) proposed that for a given number of hires, the gains
expected from greater selectivity appear to diminish as the size of the
applicant pool increases and, therefore, an optimal applicant pool size
exists for each recruiting event. That optimal applicant pool size is a
function of the rate of selectivity and cost is based on two assumptions:
(a) job performance potential is normally distributed within the appli-
A New Approach
TABLE 1
Validity of Examples of Selection Devices That Could Be Used to
Initially Screen Applicants
Quality Measure
Each student applicant submitted a copy of his or her resume to be
reviewed by recruiters. Recruiters used this information to determine
which applicants would receive on-campus interviews. The resume was
the only information about the applicants available for making this ini-
tial screening decision. To develop applicant quality scores for our anal-
ysis, two different quality measures were considered: (a) a resume rat-
ing scale, similar to a weighted application blank and (b) the applicant’s
overall college grade point average (GPA). We chose to use GPA as the
applicant quality measure for this analysis. Anecdotal evidence suggests
that GPA is used as an initial screening tool by at least some organiza-
tions. In addition, unlike the resume rating scales, population norms for
GPA are known and meta-analytic estimates of the validity of GPA for
predicting job performance exist in the literature (i.e., rxy = .23; Roth
et al., 1996).
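Because population norms for GPA are known, raw GPAs can be placed on a common standardized metric before aggregation. A minimal sketch, using placeholder norm values (the article's actual norms are not reproduced in this excerpt):

```python
def gpa_quality_scores(applicant_gpas, pop_mean=3.0, pop_sd=0.4):
    """Standardize raw GPAs against population norms. The norm values here
    are placeholders; the article relies on published norms that are not
    reproduced in this excerpt."""
    return [(gpa - pop_mean) / pop_sd for gpa in applicant_gpas]

# Hypothetical applicant pool.
pool = [3.8, 3.2, 2.9, 3.5]
scores = gpa_quality_scores(pool)

# Scores now sit on a common metric, so applicants from different pools
# or recruiting cycles can be compared directly.
assert max(scores) == scores[0]       # the 3.8 GPA applicant ranks first
assert abs(max(scores) - 2.0) < 1e-9  # roughly two SDs above the mean
```

Scaling to the applicant population, rather than to a single pool, is what later allows cross-pool and cross-cycle comparisons.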
Form of the quality measure. Once individual quality score estimates
for all applicant pool members are developed, a question arises con-
cerning how individual quality score data should be aggregated to most
appropriately depict attraction outcomes. Hawk (1967) uses the mean
Utility Calculations
In order to use UA to convert differences in attraction outcomes to
a dollar metric, it was necessary to provide estimates of several values
required by Equation 1. These values are illustrative and the actual val-
ues used by organization decision makers would depend on the charac-
teristics of each recruitment event. In this example, we chose to hold
rxy, SDy, N, T, and AC constant across all analyses. We set rxy = .23
based on the results of a meta-analysis of GPA validity data by Roth
et al. (1996). This is the estimated validity of GPA for predicting job
(i.e., ideal, moderate, and poor) and three levels of job acceptance per-
formance (i.e., ideal, moderate, and poor) are examined. These are de-
scribed in the note to Table 4. In each case the status maintenance ef-
fect is determined first, then the job choice effect is calculated on the
remaining applicants, allowing the unique impact of each process to be
examined. The right hand column is the cumulative effect of all three re-
cruitment processes shown as a deviation from the best possible outcome
(i.e., Job A attraction with ideal status maintenance and job acceptance).
These results show that even though the same combinations of status
maintenance and job acceptance performance are applied to all appli-
cant pools, the consequences they produce are very different. For exam-
ple, for Job A, which has the highest overall TOP15 score and the lowest
standard deviation of scores, even poor status maintenance and job ac-
ceptance had only minor consequences for utility. The outcomes are
somewhat different for Job D. Even though the applicant pool for Job D
contains top candidates whose quality scores are only slightly lower than
those for Job A, the greater variance in scores results in substantial re-
ductions in utility when status maintenance and job acceptance perfor-
mance are poor (i.e., $6,603.56). Job E demonstrates the full potential
impact of poor attraction. Under ideal status maintenance and job ac-
ceptance, the difference in utility between Jobs A and E is $4,473.59.
But if the difference in attraction outcomes is combined with poor sta-
tus maintenance and poor job acceptance, the difference balloons to
$11,976.94. This is substantial given that it is based on (a) hiring only
two associate engineers, (b) expectations that they will remain for one
year, and (c) a selection device with validity of only r = .23. Increasing
any of these inputs would increase these utility estimates.
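Utility comparisons of this kind can be sketched in code. Equation 1 is not reproduced in this excerpt, so the sketch below assumes the common Brogden-style form ΔU = N · T · rxy · SDy · Z̄ − AC, and all input values are hypothetical rather than the article's actual figures.

```python
def recruitment_utility(n_hires, tenure_years, validity, sd_y,
                        mean_z_hired, attraction_cost):
    """Brogden-style utility sketch (assumed form, since Equation 1 is not
    reproduced in this excerpt): Delta-U = N * T * r_xy * SD_y * Zbar - AC."""
    return n_hires * tenure_years * validity * sd_y * mean_z_hired - attraction_cost

# Hypothetical inputs, not the article's actual values: two associate
# engineers, one-year tenure, GPA validity r = .23, SDy of $20,000,
# and $4,000 in recruitment costs.
ideal = recruitment_utility(2, 1, 0.23, 20_000, 1.8, 4_000)  # top candidates hired
poor = recruitment_utility(2, 1, 0.23, 20_000, 0.9, 4_000)   # top candidates lost

# The loss of potential is driven entirely by the drop in mean hired quality.
loss_of_potential = ideal - poor
```

As the article notes, raising N, T, rxy, or SDy scales these differences up, which is why even small attraction effects can be economically meaningful.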
REFERENCES
Boudreau JW, Rynes SL. (1985). Role of recruitment in staffing utility analysis. Journal of
Applied Psychology, 70, 354-366.
Breaugh JA. (1992). Recruitment: Science and practice. Boston: PWS-Kent.
Bretz RD, Judge TA. (1994). The role of human resource systems in job applicant decision
processes. Journal of Management, 20, 531-551.
Brogden HE. (1949). When testing pays off. PERSONNEL PSYCHOLOGY, 3, 133-154.
Cable DM, Aiman-Smith L, Mulvey PW, Edwards JR. (2000). The sources and accuracy
of job applicants’ beliefs about organizational culture. Academy of Management
Journal, 43, 1076-1085.
Cable DM, Judge TA. (1996). Person-organization fit, job choice decisions, and organiza-
tional entry. Organizational Behavior and Human Decision Processes, 67, 294-311.
Connerley ML, Rynes SL. (1997). The influence of recruiter characteristics and orga-
nizational recruitment support on perceived recruiter effectiveness: Views from
applicants and recruiters. Human Relations, 50(12), 1563-1586.
Cronbach LJ, Gleser GC. (1965). Psychological tests and personnel decisions, 2nd edition.
Urbana, IL: University of Illinois Press.
Davidson L. (1998). Measuring what you bring to the bottom line. Workforce, 77(9), 34+.
FitzEnz J. (1984). How to measure human resource management. New York: McGraw-Hill.
Ford BD, Vitelli R, Stuckless N. (1996). The effects of computer versus paper-and-pencil
administration on measures of anger and revenge with an inmate population. Com-
puters in Human Behavior, 12(1), 159-166.
Garman AN, Mortensen S. (1997). Using targeted outreach to recruit minority students
into competitive service organizations. College Student Journal, 31(2), 174-179.
Griffeth RW, Hom PW, Fink LS, Cohen DJ. (1997), Comparative tests of multivariate
models of recruiting sources effects. Journal of Management, 23, 19-36.
Grossman RJ. (2000). Measuring up: Appropriate metrics help HR prove its worth. HR
Magazine, 45(1), 28-35.
Hawk RH. (1967). The recruitment function. New York: American Management Associa-
tion.
Highhouse S, Stierwalt SL, Bachiochi P, Elder AE, Fisher G. (1999). Effects of advertised
human resource management practices on attraction of African American appli-
cants. PERSONNEL PSYCHOLOGY, 52,425-442.
Hunter JE. (1980). Validity generalization for 12,000 jobs: An application of synthetic validity
generalization to the General Aptitude Test Battery (GATB). Washington, DC: U.S.
Department of Labor, Employment Service.
Hunter JE, Hunter RF. (1984). Validity and utility of alternative predictors of job perfor-
mance. Psychological Bulletin, 96, 72-98.
Huselid MA. (1995). The impact of human resource management practices on turnover,
productivity, and corporate financial performance. Academy of Management Jour-
nal, 38, 635-672.
Judge TA, Bretz RD. (1992). Effects of work values on job choice decisions. Journal of
Applied Psychology, 77,261-271.
Judge TA, Cable DM. (1997). Applicant personality, organizational culture, and organi-
zation attraction. PERSONNEL PSYCHOLOGY, 50,359-394.
Lavigna RJ. (1996). Innovation in recruiting and hiring: Attracting the best and brightest
to Wisconsin state government. Public Personnel Management, 25(4), 423-437.
Lunz ME, Deville CW. (1996). Validity of item selection: A comparison of automated
computerized adaptive and manual paper and pencil examinations. Teaching and
Learning in Medicine, 8(3), 152-157.
Mason NA, Belt JA. (1986). Effectiveness of specificity in recruitment advertising. Journal
of Management, 12, 425-432.
McDaniel MA, Schmidt FL, Hunter JE. (1988). A meta-analysis of the validity of methods
for rating training and experience in personnel selection. PERSONNEL PSYCHOLOGY,
41,283-314.
McDaniel MA, Whetzel DL, Schmidt FL, Mauer SD. (1994). The validity of employment
interviews: A comprehensive review and meta-analysis. Journal of Applied Psychol-
ogy, 79, 599-616.
Murphy KR. (1986). When your top choice turns you down: Effect of rejected offers on
the utility of selection tests. Psychological Bulletin, 99, 133-138.
Ones DS, Viswesvaran C, Schmidt FL. (1993). Comprehensive meta-analysis of integrity
test validities: Findings and implications for personnel selection and theories of job
performance. Journal of Applied Psychology Monograph, 78,679-703.
Roth PL, BeVier CA, Switzer FS 111, Schippmann JS. (1996). Meta-analyzing the rela-
tionship between grades and job performance. Journal of Applied Psychology, 81,
548-556.
Rothstein HR, Schmidt FL, Erwin FW, Owens WA, Sparks CP. (1990). Biographical data
in employment selection: Can validities be made generalizable? Journal of Applied
Psychology, 75, 175-184.
Ryan AM, Sacco JM, McFarland LA, Kriska SD. (2000). Applicant self-selection: Corre-
lates of withdrawal from a multiple hurdle process. Journal of Applied Psychology,
85, 163-179.
Rynes SL. (1991). Recruitment, job choice, and posthire consequences. In Dunnette MD
(Ed.), Handbook of industrial and organizational psychology, 2nd ed. (pp. 399-444).
Palo Alto, CA: Sage.
Rynes SL, Barber AE. (1990). Applicant attraction strategies: An organizational perspec-
tive. Academy of Management Review, 15,286-310.
Rynes SL, Boudreau JW. (1986). College recruiting in large organizations: Practice, eval-
uation, and research implications. PERSONNEL PSYCHOLOGY, 39,729-757.
Sackett PR, Ostgaard DJ. (1994). Job-specific applicant pools and national norms for
cognitive ability tests: Implications for range restriction corrections in validation
research. Journal of Applied Psychology, 79, 680-684.
Saks AM, Wiesner WH, Summers RJ. (1996). Effects of job previews and compensation
policy on applicant attraction and job choice. Journal of Vocational Behavior, 49,
68-85.
Schmidt FL, Hunter JE. (1998). The validity and utility of selection methods in personnel
psychology: Practical and theoretical implications of 85 years of research findings.
Psychological Bulletin, 124, 262-274.
Schmidt FL, Hunter JE, McKenzie RC, Muldrow TW. (1979). Impact of valid selection
procedures on workforce productivity. Journal of Applied Psychology, 64, 609-626.
Segall DO, Moreno KE. (1999). Development of the computerized adaptive testing
version of the armed services vocational aptitude battery. In Drasgow F, Olson-
Buchanan JB (Eds.), Innovations in computerized assessment (pp. 35-65). Mahwah,
NJ: Erlbaum.
SHRM/CCH (1999, Summer). Human resources management: Ideas and trends in person-
nel. St. Petersburg, FL: CCH Incorporated.
Stevens CK. (1997). Effects of preinterview beliefs on applicants’ reactions to campus
interviews. Academy of Management Journal, 40, 947-966.
Terpstra DE, Rozell EJ. (1993) The relationship of staffing practices to organizational level
measures of performance. PERSONNEL PSYCHOLOGY, 46,27-48.
Turban DB, Campion JE, Eyring AR. (1995). Factors related to job acceptance decisions
of college recruits. Journal of Vocational Behavior, 47(2), 193-213.
Vispoel WP, Boo J, Bleiler T. (2001). Computerized and paper-and-pencil versions of
APPENDIX A
Assessing Attraction Outcomes: A Step-by-Step Overview
Step 1. Identify positions to assess.
Description: Organizations may choose not to develop scores to eval-
uate recruitment outcomes for all positions. Those positions where as-
sessment of attraction outcomes is likely to be of greatest value are jobs
that generate several new hires and attract large numbers of applicants.
Step 2. Identify current screening mechanism and determine current
assessment properties.
Description: Organizations need to identify the current selection mech-
anism. In some organizations, screening devices are not formalized. In
these cases, efforts should be made to identify the true character of the
screening device as it currently exists.
Step 3. Determine strategy for adapting current screening device to
produce scores for each applicant and adopt changes.
Description: Depending on what screening device is currently used, the
amount of deviation between current practices and the development of
scores for each applicant with the desired properties will vary. Organi-
zations need to decide how far they want to go toward developing mea-
sures that yield scores with optimal properties. In most instances, it is
likely to be more effective for organizations to develop measures over
a series of iterations aimed at working toward desired score properties.
Doing so is likely to minimize costs. Some of the properties are more
easily developed after a particular measure has been used for a period
of time. Three general strategies exist for referencing scores to the appli-
cant population: (a) Use an instrument for which norms already exist.
(b) Use supplementary data to convert scores to a form that can be com-
pared across the applicant population (as in using SAT or ACT data for
college or degree program entrants to equate GPA data). (c) Maintain
an archive of scores on the devices you use. Decision makers should
consider the fixed costs of developing these measures and any potential
changes to the variable costs of administering them. These costs should
be compared to the benefits likely to be gained from the development of
attraction outcome data.
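Strategy (c) above can be sketched simply: maintain an archive of scores from past applicant pools and express each new applicant's score as a percentile against it. A minimal sketch with hypothetical archive values:

```python
import bisect

def percentile_against_archive(score, archive):
    """Strategy (c): reference a new screening score to an archive of scores
    from past applicant pools. Returns the fraction of archived scores the
    new score meets or exceeds. The archive values are hypothetical."""
    ordered = sorted(archive)
    return bisect.bisect_right(ordered, score) / len(ordered)

# Hypothetical archive of past screening-device scores.
archive = [52, 61, 64, 70, 73, 75, 78, 81, 84, 90]

p75 = percentile_against_archive(75, archive)
p95 = percentile_against_archive(95, archive)
assert p75 == 0.6   # a score of 75 meets or beats 6 of 10 archived scores
assert p95 == 1.0
```

As the archive grows across recruiting cycles, these percentiles become a steadily better approximation of standing in the applicant population.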
APPENDIX B
Capabilities Offered by Assessing Attraction Outcomes
Capability: Evaluation of alternative attraction practices.
How it adds value: Provides a means for organizations to compare the
effectiveness of alternative recruitment activities. This approach also
allows organizations to evaluate each phase of recruitment so strengths
or weaknesses in attraction can be identified.
Capability: Comparisons of applicants across applicant pools.
How it adds value: Gives decision makers the opportunity to compare
candidates from different applicant pools. Because scores are scaled to
the applicant population rather than to specific applicant pools, scores
for candidates from different applicant pools can be compared directly.
Capability: Evaluating alternative sources of candidates.
How it adds value: Provides organizations with a means of examining
alternative sources of applicants. This can be particularly useful in col-
lege recruitment programs where organizations can objectively evaluate
the potential of the applicant pools from different schools and be able
to make determinations about which sources provide better applicants.
Capability: Evaluating the effectiveness of status maintenance and job
acceptance activities.
How it adds value: Once attraction outcomes have been assessed, the
assessment of status maintenance and job acceptance outcomes simply
involves the recognition of the value lost when top candidates withdraw
from the applicant pool prior to receiving a job offer or refuse to accept
a job offer when one is offered.
Capability: Concurrent evaluation of attraction activities.
How it adds value: Because scores are developed for each applicant,
attraction outcomes can be evaluated on an ongoing basis, even before
organizations complete recruitment processes. This gives decision mak-
ers the opportunity to identify problems and adjust activities to reduce
their negative effects while there is still time to do so.
Capability: Cost-benefit analyses of attraction activities.
How it adds value: Permits organizations to evaluate both the benefits of
alternative attraction, status maintenance, and job acceptance activities,
and their costs. Unambiguous interpretations of the relative effective-
ness of these activities are possible.
Capability: Evaluating recruitment improvement opportunities.
How it adds value: Provides organizations the opportunity to compare
how well they are performing recruitment activities against what could
be possible with the applicant populations that exist and provides a
means of estimating the value of those opportunities.