HUMAN PERFORMANCE, 10(1), 47-63
Copyright © 1997, Lawrence Erlbaum Associates, Inc.

The Relative Resistance of the Situational, Patterned Behavior, and Conventional Structured Interviews to Anchoring Effects

Heloneida C. Kataoka, Gary P. Latham, and Glen Whyte
Faculty of Management
University of Toronto

Requests for reprints should be sent to Glen Whyte, University of Toronto, Faculty of Management, Rotman Centre for Management, 105 St. George Street, Toronto, Canada M5S 3E6.

Selection interviews are decision-making tools used in organizations to make hiring and promotion decisions. Individuals who conduct such interviews, however, are susceptible to deviations from rationality that may bias interview ratings. This study examined the effect of the anchoring-and-adjustment heuristic on the ratings given to a job candidate by interviewers (n = 190) using 3 different types of interview techniques: the conventional structured interview, the patterned behavior description interview, and the situational interview. The ratings of interviewers who were given a high anchor were significantly higher than the ratings of interviewers who were given a low anchor across all three interview techniques. The effect of the anchoring manipulation, however, was significantly less when the situational interview was used.

Interviews are widely used in organizations to make selection decisions (Eder, Kacmar, & Ferris, 1989; Harris, 1989). In unstructured interviews, the questions asked of candidates are not predetermined, and are frequently not even job related. The interview often takes the form of a free-flowing conversation. As a result, the unstructured interview usually has both low reliability and validity (Mayfield, 1964; Ulrich & Trumbo, 1965). Cronshaw and Wiesner (1989) concluded that unstructured interviews are so flawed as assessment techniques that no further research effort should be expended on them. Instead, research should focus on different types of structured interviews. Researchers have, in general, followed this advice and used various operationalizations of structure (Campion, Pursell, & Brown, 1988; Huffcutt & Arthur, 1994).

Regardless of the extent or manner in which interviews are structured, interviewers are faced with the task of attempting to evaluate the quality of a candidate's responses to interview questions. Considerable research in behavioral decision theory indicates that people use heuristics, or simplifying rules of thumb, when making such judgments regarding the likelihood or value of events under uncertainty (Tversky & Kahneman, 1974). Reliance on judgmental heuristics is a robust form of behavior in decision making that has been demonstrated in areas such as auditing (Johnson, Jamal, & Berryman, 1991; Joyce & Biddle, 1981), negotiations (Bazerman, 1990; Huber & Neale, 1986), utility analysis (Bobko, Shetzer, & Russell, 1991), gambling (Lichtenstein & Slovic, 1971), sales prediction (Hogarth, 1980), and predictions of spousal consumer preference (Davis, Hoch, & Ragsdale, 1986). Heuristics simplify complex judgmental tasks, but they may also introduce bias or systematic error when they are inappropriately used (Bazerman, 1990).

ANCHORING AND ADJUSTMENT

Anchoring and adjustment refers to the heuristic that people unconsciously rely on when estimating the value of something when that value is unknown or uncertain (Tversky & Kahneman, 1974). This is essentially the task required of an interviewer who is rating the quality of an interviewee's responses.
People typically begin the estimation process by selecting an anchor, from which they then adjust to arrive at a final estimate. Such adjustment, however, tends to be insufficient, with the result that final estimates are biased in the direction of the initial estimate. Most detrimental to the accuracy of final estimates, however, is the tendency for people to choose anchors because they are handy rather than because they are relevant (Bazerman, 1990).

Anchoring is a robust phenomenon that has been observed in many domains and tasks, including assessing probabilities (Edwards, Lindman, & Phillips, 1965; Lopes, 1985, 1987; Peterson & DuCharme, 1967; Wright & Anderson, 1989), making predictions based on historical data (Sniezek, 1988), making utility assessments (Johnson & Schkade, 1988; Shanteau & Phelps, 1979), exercising clinical judgment (Friedlander & Stockman, 1983; Zuckerman, Koestner, Colella, & Alton, 1984), inferring causal attributions (Quattrone, 1982), estimating confidence ranges (Block & Harper, 1991), making accounting-related judgments (Butler, 1986), goal setting (Mano, 1990), making motivation-related judgments (Cervone & Peake, 1986; Switzer & Sniezek, 1991), belief updating and change (Einhorn & Hogarth, 1985; Hogarth & Einhorn, 1989), evaluating product bundles (Yadav, 1994), and determining listing prices for houses (Northcraft & Neale, 1987).

In most anchoring studies, individuals have been asked to provide a numerical estimate regarding the frequency of a class or the value of an object when that frequency or value is unknown or uncertain. For example, Tversky and Kahneman (1974) asked people to estimate the percentage of African countries in the United Nations. Arbitrary estimates were initially provided to participants, who were then asked whether the actual percentage was higher or lower than this number. These initial estimates were determined in the participants' presence by the spin of a roulette wheel. The median final estimate of participants who received 10% as the initial estimate was 25%. In contrast, the median final estimate of participants who received 65% as the initial estimate was 45%. Thus, even though the initial estimates were clearly irrelevant to the task at hand, people used them as anchors and failed to adjust them sufficiently when arriving at final estimates. This effect occurs even when people are provided with monetary incentives to make accurate estimates (Kahneman, Slovic, & Tversky, 1982).
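The insufficient-adjustment account can be made concrete with a toy model in which the final estimate moves only a fraction k (0 < k < 1) of the distance from the anchor toward the judge's own unbiased estimate. The sketch below is purely illustrative and is not from the article; the function name, the value k = 0.5, and the assumed unbiased estimate of 40% are all hypothetical.

```python
# Toy model of anchoring with insufficient adjustment (illustrative only).
# final = anchor + k * (unbiased - anchor); because k < 1, the final
# estimate stays biased toward whichever anchor was supplied.

def anchored_estimate(anchor: float, unbiased: float, k: float = 0.5) -> float:
    """Adjust only a fraction k of the way from the anchor toward the unbiased estimate."""
    return anchor + k * (unbiased - anchor)

# Hypothetical judge whose unbiased estimate of the true percentage is 40%.
for anchor in (10.0, 65.0):
    print(f"anchor {anchor}% -> final estimate {anchored_estimate(anchor, 40.0):.1f}%")
# A 10% anchor yields 25.0% and a 65% anchor yields 52.5%, qualitatively
# matching the 25% vs. 45% medians Tversky and Kahneman (1974) report.
```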
We suggest that interviewers will be susceptible to anchoring effects in part because of evidence from the literature on performance appraisal indicating that evaluations are often biased in the direction of previous evaluations (Smither, Reilly, & Burden, 1988). Past evaluations serve as an anchor for current evaluations that are only partially revised in the face of new evidence (Murphy, Balzer, Lockhart, & Eisenman, 1985).

Anchoring effects are distinct from other sources of bias, such as contrast effects. Both Wexley, Sanders, and Yukl (1973) and Latham, Wexley, and Pursell (1975) found that a decision to hire an applicant based on a selection interview is made by contrasting the person's qualifications with those of other applicants. Thus, a person who was shown to be a 5 (marginally acceptable) on a 9-point scale when evaluated against job requirements was judged to be a 3 or lower when preceded by two highly qualified applicants. Similarly, the same person was assessed as a 7 or higher when preceded by two unqualified applicants. Anchoring effects, however, may occur independent of the presence of other applicants.

This study investigated the resistance of the conventional structured interview (CSI), the patterned behavior description interview (PBDI), and the situational interview (SI) to bias due to reliance on the anchoring-and-adjustment heuristic. When rating the quality of an interviewee's answer to an interview question, interviewers are engaged in the type of task that can be accomplished through anchoring and adjustment. Different interviewers, however, may be using different anchors when judging the quality of a candidate's response. For example, interviewers may employ what they consider to be an excellent answer as an anchor, and then compare the candidate's answer to it. In contrast, other interviewers may employ what they consider to be a poor answer as the anchor. The rating of a candidate's response will likely manifest the effect of anchoring and adjustment regardless of the anchor used. The extent to which an interview technique is structured, however, may minimize anchoring effects.

STRUCTURED INTERVIEWS

The CSI typically consists of a series of job-related questions that are presented to each candidate (Maurer & Fay, 1988). The questions focus on job responsibilities, duties, job knowledge, and achievements in previous jobs, but they are not necessarily based on a formal job analysis. The validity of structured interviews, however, is increased when they are based on a formal job analysis (Wiesner & Cronshaw, 1988). Two types of structured interview techniques that rely on a job analysis are the SI (Latham, 1989) and the PBDI (Janz, 1989).

The SI is derived from goal-setting theory and is based on the premise that intentions predict behavior (Locke & Latham, 1990). Interview questions using this method are determined from the results of a job analysis using the critical incident technique (Flanagan, 1954). Job candidates are asked what they would do in response to a series of job-related critical incidents. Each incident contains a dilemma that is designed to elicit the candidates' intentions. A distinguishing feature of the SI is that it provides a behavior-based scoring guide for interviewers to use when evaluating candidates' responses.

The PBDI, in contrast, is based on the premise that the best predictor of future behavior is past behavior. As with the SI, PBDI questions are based on the results of a job analysis using the critical incident technique. Candidates are typically presented with the criterion dimension of interest to the employer, are asked to recall a relevant past incident, and are asked to describe the actions that they took to deal with it. A scoring guide is neither practical nor typically used with a PBDI because of the wide variability in the responses that are obtained in descriptions of each candidate's personal history, and it has been eschewed by Janz (1989).

In contrast to the CSI, the PBDI and the SI focus explicitly on behavior. For example, with the PBDI, candidates are asked to describe a specific situation that occurred in the past, and to describe the actions that they took in response to it (Janz, 1989). With the SI, a specific situation is presented to candidates, who are then asked to describe what action they would take if faced with that situation.
Research on structured interviews has focused primarily on issues of reliability and validity (Janz, 1982; Latham & Saari, 1984; Latham, Saari, Pursell, & Campion, 1980; Latham & Skarlicki, 1995; Orpen, 1985; Weekley & Gier, 1987). Relatively little attention, however, has been paid to the issue of freedom from bias (Harris, 1989). This is surprising, given that bias attenuates both reliability and validity (Thorndike, 1949).

Four studies have investigated bias in the SI. Maurer and Fay (1988) investigated the ability of interview structure and interviewer training to reduce the effects of errors such as halo, contrast, similar-to-me, and first impressions on rating variability. Even though no training effect was found, greater agreement was found among the ratings obtained from the SI than among those obtained with the CSI. These findings suggest that the SI is more robust to the effects of bias than the CSI. Lin, Dobbins, and Farh (1992) investigated bias due to similarities between interviewers and interviewees in terms of race and age. Stronger same-race effects were found for the CSI than for the SI. Neither the CSI nor the SI was affected by age similarity. In a study of police officers, Maurer and Lee (1994) found that the SI minimized contrast effects on the accurate assessment of information provided by multiple candidates.

Only one study has investigated the resistance of the PBDI to bias. Latham and Skarlicki (1996) examined the effectiveness of the CSI, the PBDI, and the SI in minimizing the similar-to-me bias of francophone managers in Quebec. This bias was not apparent when either the SI or the PBDI was used, but it did occur when the CSI was used.

INTERVIEW STRUCTURE

Compared to the SI and the PBDI, the CSI lacks structure (e.g., a scoring guide) and a sole focus on behavior. As a result, the CSI provides only a relatively loose framework that interviewers can rely on to assess the quality of an interviewee's responses. In contrast, the structure and behavioral focus of the PBDI and the SI may reduce the likelihood that an interviewer will rely on an inappropriate anchor in assessing the responses of an interviewee. In relation to the PBDI and the SI, the CSI therefore may be more susceptible to problems such as those caused by anchoring effects.

Inherent in the application of the SI is the use of a scoring guide. The scoring guide increases the degree of structure in the SI relative to both the CSI and the PBDI. Therefore, the SI may be less susceptible to an anchoring-induced bias than are the other two techniques. Biases are partly responsible for disagreements among decisions made by different interviewers (Maurer & Fay, 1988), because bias affects different interviewers differently. The PBDI, which typically lacks a scoring guide, has shown modest interrater reliability coefficients of approximately .49 (Janz, 1989), whereas the SI has shown much higher coefficients, ranging from .76 to .96 (Latham, 1989).

Thus, the hypotheses tested in this study were as follows:

H1: Interviewer ratings of identical responses to interview questions will be significantly more favorable when interviewers are provided with a high rather than a low anchor.

H2: Interview type will moderate the extent to which interviewers will be affected by anchoring.
More specifically, the PBDI will be more resistant to anchoring effects than the CSI.

H3: The SI will be more resistant to anchoring effects than the CSI.

H4: The SI will be more resistant to anchoring effects than the PBDI.

METHOD

Design and Sample

To determine the influence of anchoring effects on interviewers using the CSI, the PBDI, and the SI, a 3 × 3 (Anchor × Interview Type) between-subjects factorial design was used. Each participant was randomly assigned to a low, high, or control anchor condition, and to a CSI, PBDI, or SI condition, making both anchor and interview type three-level between-subjects factors.

A total of 190 participants (94 women and 96 men) were involved in the study. All participants were graduate students of business administration enrolled in an MBA program at a large North American university. The participants had an average of approximately 6 years of full-time work experience. Their average age was 29 years. MBA students were used as a means of increasing the external validity of the results (Gordon, Slade, & Schmitt, 1986).

Procedure

Videotaped simulated interviews, one for each interview type, were used to maintain uniformity in both the candidate's answers and behavior during the interview (Ilgen, 1986). All participants in each interview condition watched the same videotape showing a candidate being interviewed for a position as a teller in a bank. This job was chosen because the job requirements are straightforward (e.g., prioritize requests, handle complaints). Therefore, prior familiarity of the interviewers with the job was not necessary.

To maintain consistency across interview types, the questions used in each interview format were written to address the same job dimensions. To check for consistency in this regard across interview type, organizational behavior doctoral students (n = 5) were given the scripts for each of the three interviews and asked whether they agreed that the questions tapped the same underlying dimensions. Answers were given using a 5-point Likert-type scale ranging from 1 (strongly disagree) to 5 (strongly agree). There was high agreement that the questions represented similar dimensions across interview type (M = 4.4, SD = 0.5).

Candidate responses to the interview questions were also developed. Average answers, as opposed to either excellent or poor answers, were written for each of the interview questions. Because the purpose of this research was to investigate the effects of anchoring on interviewer ratings of candidate responses, this step would potentially allow both the high and low anchor manipulations to demonstrate an impact toward both ends of the rating scales.

The procedure followed to develop candidate responses to interviewer questions was similar to the one used by Maurer and Fay (1988). The answers were first developed for the SI, and were written to be comparable to what would be described in the scoring guide as an average response. Care was taken so that the candidate responses did not correspond verbatim to the behaviors referred to in the guide, because this would rarely if ever happen in practice. After developing the answers to the SI, comparable answers were then generated for the PBDI and the CSI. Examples of comparable SI, PBDI, and CSI questions and their respective answers are given here:
SI question: A client has complained to your boss about your recent presentation. The client said that you were not able to answer basic questions. Your boss calls you to discuss the problem. What would you say?

Interviewer scoring guide:
(1) That surprises me. I'm sorry—it won't happen again.
(3) Yes, I know—I didn't feel it went well either. What do you suggest I do to improve?
(5) Yes, I realized it didn't go well. I'd like to call the client and follow up. But first, I'd like any suggestions you may have.

Answer: I would admit that the presentation went poorly, and I would try to talk to others to get some advice that would help me to make a better presentation the next time.

PBDI question: Tell me about a time when a client complained to your boss that you were not prepared for a presentation that you made to the client. What were the circumstances? What did you do? What was the outcome? Who can I call to verify this information?

Answer: My boss asked me to make a presentation to a client about a subject that I was not very familiar with. Unfortunately, during the presentation the client asked me some questions that I was unable to properly answer. As a result, after the meeting the client complained to my boss about my performance. My boss then called me into his office and told me about the client's complaints. I admitted that the presentation went poorly, and I said I was going to try to talk to others to get some advice that would help me to make a better presentation the next time.

CSI question: How do you respond when your boss tells you that a client has complained that you were unprepared for a presentation that you made to the client?

Answer: I admit that the presentation went poorly, and say that I will try to talk to others to get some advice that can help me to make better presentations.

One videotape for each interview type was recorded using the same setting and the same actor in the role of the job candidate to hold differences across conditions constant. Only the candidate could be seen on tape. The interviewer could be heard but not seen. The participants were asked to observe the videotape and evaluate the candidate's answers. This procedure was modeled after one that is used by the president of a bank to give final approval to the selection of tellers.

To the greatest extent possible, uniformity in the answers was maintained across each of the interview formats. Uniformity of answers across interview conditions was confirmed by a one-way analysis of variance (ANOVA) on the mean ratings of interviewee responses in the control conditions of each interview technique. No statistically significant difference was obtained in interview ratings regardless of the interview technique used, F(2, 61) = 0.48, p < .62.

Each participant received a booklet of experimental materials. The term booklet denotes each of the nine unique sets of experimental materials used in this study. Each booklet contained a set of instructions; a questionnaire tailored to the CSI, PBDI, or SI format; rating scales for each interview question ranging from 1 (poor) to 5 (good); the anchor manipulation (low, control, or high); and manipulation check questions.

Anchor was manipulated in the following way. A variation of this technique has, in a different context, successfully induced anchoring effects (e.g., Joyce & Biddle, 1981).

High-anchor condition. Prior to rating each answer of the candidate to the interview questions, the participants were asked:

Does the applicant's answer rate a score of 5? (5 = good)
(a) Yes, the applicant's answer rates a score of 5.
(b) No, the applicant's answer rates a score of less than 5.

If you chose (b), please rate the applicant's answer on a scale ranging from 1 (poor) to 5 (good).

Low-anchor condition. Prior to rating each of the candidate's answers to the interview questions, participants in this condition were asked:

Does the applicant's answer rate a score of 1? (1 = poor)
(a) Yes, the applicant's answer rates a score of 1.
(b) No, the applicant's answer rates a score of greater than 1.

If you chose (b), please rate the applicant's answer on a scale ranging from 1 (poor) to 5 (good).

Control condition. Participants were simply asked to rate the candidate's answers to each of the interview questions according to a rating scale ranging from 1 (poor) to 5 (good).

RESULTS

Dependent Variable

The overall rating received by the candidate was calculated as the sum of the scores assigned to each of the 10 interview questions. Thus, the candidate's total score could range from 10 to 50. Mean scores for each interview type and anchor condition are shown in Table 1.

TABLE 1
Means and Standard Deviations for Overall Ratings of Candidate Responses According to Interview Type

                              Anchor
                   Low              Control            High
Interview Type   M      SD   n    M      SD   n    M      SD   n
CSI             29.21  5.30  24  32.95  6.95  19  38.10  5.69  21
PBDI            31.62  7.42  21  33.52  6.22  21  39.10  7.06  20
SI              30.62  3.37  21  31.82  4.16  22  33.75  2.70  21

Note. CSI = conventional structured interview; PBDI = patterned behavior description interview; SI = situational interview.

Manipulation Checks

After watching the interview and rating the candidate's answers, participants completed single-item scales designed to investigate their perceptions of the candidate's behavior during the interview. One-way ANOVAs on responses to the scales revealed that the candidate's behavior was perceived consistently across conditions. That is, there was no significant difference in how participants across interview types viewed the candidate in terms of enthusiasm, F(2, 187) = 0.70, p < .50; friendliness, F(2, 187) = 2.06, p < .13; confidence, F(2, 187) = 0.07, p < .93; concern, F(2, 187) = 2.33, p < .11; attention, F(2, 187) = 1.58, p < .21; and sincerity, F(2, 187) = 1.43, p < .24.

Statistical Analyses

Planned comparisons were used to test the four hypotheses of this study. This method focuses on smaller designs of interest extracted from the original factorial design (Keppel & Zedeck, 1989). Rather than focusing on an overall omnibus F test, this method was chosen because it allows the researcher to focus on meaningful components of the design and directly test the specific hypotheses of the study. It also reduces the possibility of committing a Type I error (Keppel, 1991).

To test the first hypothesis, analyses of simple effects of the anchoring manipulation on each interview type were conducted. One-way ANOVAs revealed that for all three interview types, anchor had a significant effect on interviewer ratings: CSI, F(2, 61) = 12.50, p < .01; PBDI, F(2, 59) = 6.44, p < .01; SI, F(2, 61) = 4.38, p < .02. Therefore, the first hypothesis was supported. The effect size for the SI, however, was medium (ω² > 0.06; Cohen, 1977), whereas the effect sizes for the PBDI and the CSI were large (ω² > 0.15). ω² for the CSI, PBDI, and SI was 0.27, 0.15, and 0.10, respectively. Figure 1 illustrates anchoring effects on interview ratings for the three interview types investigated.

[Figure 1. Mean overall ratings by interviewers in low, control, and high anchor conditions.]
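For readers who wish to see how such quantities are obtained, the following sketch computes a simple-effects one-way ANOVA and the ω² effect size for a single interview type. The rating vectors are hypothetical stand-ins (the raw data are not reported in the article), so the output will not reproduce the values above.

```python
# Illustrative computation of a one-way ANOVA F and omega-squared for the
# effect of anchor condition on overall ratings (hypothetical data).
import numpy as np
from scipy import stats

low = np.array([28, 30, 27, 31, 29, 30], dtype=float)      # hypothetical overall ratings (10-50)
control = np.array([33, 32, 34, 31, 35, 33], dtype=float)
high = np.array([38, 37, 39, 36, 40, 38], dtype=float)
groups = [low, control, high]

f_stat, p_value = stats.f_oneway(*groups)

# omega^2 = (SS_between - (k - 1) * MS_within) / (SS_total + MS_within)
k = len(groups)
n = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_within = ss_within / (n - k)
omega_sq = (ss_between - (k - 1) * ms_within) / (ss_between + ss_within + ms_within)

print(f"F({k - 1}, {n - k}) = {f_stat:.2f}, p = {p_value:.4f}, omega^2 = {omega_sq:.2f}")
```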
To further investigate the relative susceptibility of each interview type to anchoring, F tests were conducted on the differences between the variances of ratings pooled according to interview type. The variance of interviewer ratings was significantly less for the SI than for either the CSI, F(63, 63) = 3.62, p < .01, or the PBDI, F(63, 61) = 4.22, p < .01. The difference in the variance of interviewer ratings between the PBDI and the CSI, however, was not significant, F(61, 63) = 1.16, p > .10. These results suggest that ratings in the SI condition were less affected by anchoring than ratings in the other interview conditions. The higher degree of agreement among raters in the SI condition than in the CSI condition also replicates the results obtained by Maurer and Fay (1988).

The second, third, and fourth hypotheses of this study suggested that an increase in interview structure would decrease the effect of anchoring. Three 2 × 2 (Interview Type × Anchor) ANOVAs were conducted. The first tested whether anchoring effects were more pronounced in the ratings of interviewers using the CSI compared with those using the PBDI for both high and low anchor conditions. An answer to this question can be determined with regard to the existence of a significant Interview Type × Anchor interaction effect (Keppel, 1991). The interaction test shows whether the simple effects of anchoring may be considered the same (no interaction) or different (interaction). The results revealed that the interaction term was not significant, thus indicating no significant difference between the PBDI and the CSI in terms of resistance to the effects of anchoring, F(1, 82) = 0.26, p < .61. Hypothesis 2 was therefore rejected.

The second 2 × 2 (Interview Type × Anchor) ANOVA compared the ratings obtained in the SI condition with those obtained in the CSI condition for both high and low anchor conditions. The results in this case revealed a significant interaction effect, F(1, 83) = 8.91, p < .01. These results indicate that although the SI is still susceptible to anchoring effects, it was more resistant than was the CSI. Hypothesis 3 was thus supported.

The third 2 × 2 (Interview Type × Anchor) ANOVA compared the ratings obtained in the SI condition with those of the PBDI. The results revealed a marginally significant interaction, F(1, 79) = 3.19, p < .07, suggesting that the SI is more resistant to anchoring effects than the PBDI (Hypothesis 4). The results of the analyses of interactions between the different types of interviews are shown in Figure 2.

[Figure 2. Comparisons of anchoring effects between interview types.]
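As a minimal sketch of the variance-ratio test reported above, the following code compares the spread of ratings pooled within two interview types. The rating vectors are hypothetical, and scipy's F distribution supplies the two-tailed p value.

```python
# Minimal sketch of an F test comparing two rating variances (hypothetical data).
import numpy as np
from scipy import stats

def variance_ratio_test(x, y):
    """Two-tailed F test of equal variances, larger sample variance in the numerator."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    v_x, v_y = x.var(ddof=1), y.var(ddof=1)
    if v_x < v_y:                       # keep the larger variance on top
        x, y, v_x, v_y = y, x, v_y, v_x
    f = v_x / v_y
    df1, df2 = len(x) - 1, len(y) - 1
    p = min(2 * stats.f.sf(f, df1, df2), 1.0)   # two-tailed p value
    return f, df1, df2, p

csi = [25, 38, 30, 41, 27, 36, 33, 39]   # hypothetical CSI ratings (wide spread)
si = [31, 33, 32, 34, 30, 33, 32, 31]    # hypothetical SI ratings (narrow spread)
f, df1, df2, p = variance_ratio_test(csi, si)
print(f"F({df1}, {df2}) = {f:.2f}, p = {p:.3f}")
```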
DISCUSSION

This study showed that interviewers, when rating the responses of job candidates to interview questions, are susceptible to anchoring effects when employing structured interview techniques. Candidate ratings were biased in the direction of the anchor provided to the interviewer, regardless of the interview technique that was used. This bias, however, was significantly less when the SI was used as compared to the CSI and the PBDI. The relative resistance of the SI to anchoring effects as compared with the PBDI is likely attributable to the use of a scoring guide. Scoring guides may tend to reduce anchoring effects because the behaviors appearing on the guides are themselves "anchors" designed to assist raters in making judgments. These referents may serve to decrease the likelihood of inappropriate anchoring. That anchoring effects from irrelevant sources were not entirely eliminated by the use of a scoring guide, however, is testimony to the strength of what is clearly a robust phenomenon (Bazerman, 1990).

This study contributes to knowledge in three ways. First, it links the selection interview literature with the literature on individual decision making in a way that increases our understanding of the selection interview, while adding to our understanding of the boundary conditions of the process of anchoring and adjustment. Although some work has been done linking the individual decision-making literature with human resource management in general (e.g., Northcraft, Neale, & Huber, 1989), and with selection decisions specifically (e.g., Huber, Northcraft, & Neale, 1990), the effects of cognitive biases on different types of structured selection interviews have not been previously investigated.

Second, the results of this study suggest a way to debias human judgment. There has been considerable research demonstrating the existence of bias in decision making, but relatively less effort devoted to examining how to reduce it. Moreover, the results of research on debiasing judgment have not been overly encouraging. This study, however, represents some good news in the sense that it indicates how one source of bias, the anchoring heuristic, can be reduced by the use of a scoring guide.

Third, the study has practical implications for managerial behavior. It demonstrates that a job candidate may be judged more favorably than is justified because the interviewer is using a high anchor when rating candidate responses to interview questions. The result in such a case could be an inappropriate hiring decision. Similarly, a decision not to hire a suitable candidate could result if the interviewer uses a low anchor to assess answers to questions asked during the interview process. In either case, the result is negative for the organization.

Although one could question the extent to which high motivation to make accurate decisions affects the anchoring process, evidence suggests that the introduction of substantial incentives to make accurate choices does not eliminate systematic errors of judgment (e.g., Grether & Plott, 1979; Slovic & Lichtenstein, 1983). Incentives narrow attention and increase deliberation (Tversky & Kahneman, 1986). Because people rely unconsciously on anchors to make estimates under uncertainty (Tversky & Kahneman, 1974), it is not at all clear how incentives would reduce anchoring effects. The external validity of the present findings is therefore an issue for further research.

Another potential limitation of this study is the fact that each participant saw only one videotape of a single candidate. Although the participants had considerable work experience, a stronger test of the hypotheses would involve a field study that included different candidates for different jobs. Future research should also examine the extent to which anchoring effects influence the outcome of different structured interview techniques when interviews are conducted and decisions are made by a panel.
Examination of the extent to which group decision making may amplify or reduce the effect of heuristics on interviewer ratings should prove to be interesting and worthwhile (e.g., Argote, Seabright, & Dyer, 1986; Whyte, 1993).

The extent to which structured selection interviews are free from bias has received little research attention. To further explore this question, this study proposed that even structured selection interviews can be understood as fertile ground for the occurrence of the cognitive biases that characterize individual decision making. These biases potentially reduce the reliability and validity of the selection interview. Through an understanding of the causes and consequences of these biases, the employment interview as a selection device can be improved.

ACKNOWLEDGMENT

This research was supported by a SSHRC grant.

REFERENCES

Argote, L., Seabright, M. A., & Dyer, L. (1986). Individual versus group use of base-rate and individuating information. Organizational Behavior and Human Decision Processes, 38, 65-75.
Bazerman, M. H. (1990). Judgment in managerial decision making. New York: Wiley.
Block, R. A., & Harper, D. R. (1991). Overconfidence in estimation: Testing the anchoring and adjustment hypothesis. Organizational Behavior and Human Decision Processes, 49, 188-207.
Bobko, P., Shetzer, L., & Russell, C. (1991). Estimating the standard deviation of professors' worth: The effect of frame and presentation in utility analysis. Journal of Occupational Psychology, 64, 179-188.
Butler, S. A. (1986). Anchoring in the judgmental evaluation of audit samples. Accounting Review, 61, 101-111.
Campion, M. A., Pursell, E. D., & Brown, B. K. (1988). Structured interviewing: Raising the psychometric properties of the employment interview. Personnel Psychology, 41, 25-42.
Cervone, D., & Peake, P. K. (1986). Anchoring, efficacy, and action: The influence of judgmental heuristics on self-efficacy judgments and behavior. Journal of Personality and Social Psychology, 50, 492-501.
Cohen, J. (1977). Statistical power analysis for the behavioral sciences. New York: Academic.
Cronshaw, S. F., & Wiesner, W. H. (1989). The validity of the employment interview: Models for research and practice. In R. W. Eder & G. R. Ferris (Eds.), The employment interview: Theory, research, and practice (pp. 269-281). Beverly Hills, CA: Sage.
Davis, H. L., Hoch, S. J., & Ragsdale, E. K. (1986). An anchoring and adjustment model of spousal prediction. Journal of Consumer Research, 13, 25-37.
Eder, R. W., Kacmar, K. M., & Ferris, G. R. (1989). Employment interview research: History and synthesis. In R. W. Eder & G. R. Ferris (Eds.), The employment interview: Theory, research, and practice (pp. 17-31). Beverly Hills, CA: Sage.
Edwards, W., Lindman, H., & Phillips, L. D. (1965). Emerging technologies for making decisions. In T. M. Newcomb (Ed.), New directions in psychology II (pp. 261-325). New York: Holt, Rinehart & Winston.
Einhorn, H. J., & Hogarth, R. M. (1985). Ambiguity and uncertainty in probabilistic inference. Psychological Review, 92, 433-461.
Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51, 327-358.
Friedlander, M. L., & Stockman, S. J. (1983). Anchoring and publicity effects in clinical judgment. Journal of Clinical Psychology, 39, 637-643.
Gordon, M. E., Slade, L. A., & Schmitt, N. (1986). The science of the sophomore revisited: From conjecture to empiricism. Academy of Management Review, 11, 191-207.
Grether, D. M., & Plott, C. R. (1979). Economic theory of choice and the preference reversal phenomenon. American Economic Review, 69, 623-638.
Harris, M. M. (1989). Reconsidering the employment interview: A review of recent literature and suggestions for future research. Personnel Psychology, 42, 691-726.
Hogarth, R. M. (1980). Judgment and choice. New York: Wiley.
Hogarth, R. M., & Einhorn, H. J. (1989). Order effects in belief updating: The belief adjustment model. Working paper, Center for Decision Research, University of Chicago.
Huber, V. L., & Neale, M. A. (1986). Effects of cognitive heuristics and goals on negotiator performance and subsequent goal setting. Organizational Behavior and Human Decision Processes, 38, 342-365.
Huber, V. L., Northcraft, G. B., & Neale, M. A. (1990). Effects of design strategy and number of openings on employment selection decisions. Organizational Behavior and Human Decision Processes, 45, 276-284.
Huffcutt, A. I., & Arthur, W. (1994). Hunter and Hunter (1984) revisited: Interview validity for entry-level jobs. Journal of Applied Psychology, 79, 184-190.
Ilgen, D. R. (1986). Laboratory research: A question of when, not if. In E. A. Locke (Ed.), Generalizing from laboratory to field settings (pp. 257-267). Lexington, MA: Lexington.
Janz, T. (1982). Initial comparisons of patterned behavior description interviews versus unstructured interviews. Journal of Applied Psychology, 67, 571-580.
Janz, T. (1989). The patterned behavior description interview: The best prophet of the future is the past. In R. W. Eder & G. R. Ferris (Eds.), The employment interview: Theory, research, and practice. Beverly Hills, CA: Sage.
Johnson, P. E., Jamal, K., & Berryman, M. G. (1991). Effects of framing on auditor decisions. Organizational Behavior and Human Decision Processes, 50, 75-105.
Johnson, E. J., & Schkade, D. A. (1988). Bias in utility assessments: Further evidence and explanations. Management Science, 35, 406-424.
Joyce, E. J., & Biddle, G. C. (1981). Anchoring and adjustment in probabilistic inferences in auditing. Journal of Accounting Research, 19, 120-145.
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.
Keppel, G. (1991). Design and analysis: A researcher's handbook. Englewood Cliffs, NJ: Prentice Hall.
Keppel, G., & Zedeck, S. (1989). Data analysis for research designs. New York: Freeman.
Latham, G. P. (1989). The reliability, validity, and practicality of the situational interview. In R. W. Eder & G. R. Ferris (Eds.), The employment interview: Theory, research, and practice (pp. 169-182). Beverly Hills, CA: Sage.
Latham, G. P., & Saari, L. M. (1984). Do people do what they say? Further studies on the situational interview. Journal of Applied Psychology, 69, 569-573.
Latham, G. P., Saari, L. M., Pursell, E. D., & Campion, M. A. (1980). The situational interview. Journal of Applied Psychology, 65, 422-427.
Latham, G. P., & Skarlicki, D. (1995). Criterion-related validity of the situational and patterned behavior description interviews with organizational citizenship behavior. Human Performance, 8, 67-80.
Latham, G. P., & Skarlicki, D. (1996). The effectiveness of the situational, patterned behavior, and conventional structured interviews in minimizing in-group favouritism of Canadian francophone managers. Applied Psychology: An International Review, 45, 177-184.
Latham, G. P., Wexley, K. N., & Pursell, E. D. (1975). Training managers to minimize rating errors in the observation of behavior. Journal of Applied Psychology, 60, 550-555.
Lichtenstein, S., & Slovic, P. (1971). Reversal of preference between bids and choices in gambling decisions. Journal of Experimental Psychology, 89, 46-55.
Lin, T. R., Dobbins, G. H., & Farh, J. L. (1992). A field study of race and age similarity on interview ratings in conventional and situational interviews. Journal of Applied Psychology, 77, 363-371.
Locke, E. A., & Latham, G. P. (1990). A theory of goal setting and task performance. Englewood Cliffs, NJ: Prentice Hall.
Lopes, L. L. (1985). Averaging rules and adjustment processes in Bayesian inference. Bulletin of the Psychonomic Society, 23, 509-512.
Lopes, L. L. (1987). Procedural debiasing. Acta Psychologica, 64, 167-185.
Mano, H. (1990). Anticipated deadline penalties: Effects on goal levels and task performance. In R. M. Hogarth (Ed.), Insights in decision making (pp. 173-176). Chicago: University of Chicago Press.
Maurer, S. D., & Fay, C. (1988). Effects of situational interviews, conventional structured interviews, and training on interview rating agreement: An experimental analysis. Personnel Psychology, 41, 329-344.
Maurer, S. D., & Lee, T. W. (1994). Toward a resolution of contrast error in the employment interview: A test of the situational interview. In D. P. Moore (Ed.), Academy of Management Best Papers Proceedings 1994 (pp. 132-136). Madison, WI: Omnipress.
Mayfield, E. C. (1964). The selection interview: A re-evaluation of published research. Personnel Psychology, 17, 239-260.
Murphy, K. R., Balzer, W. K., Lockhart, M. C., & Eisenman, E. J. (1985). Effects of previous performance on evaluations of present performance. Journal of Applied Psychology, 70, 72-84.
Northcraft, G. B., & Neale, M. A. (1987). Amateurs, experts, and real estate: An anchoring-and-adjustment perspective on property pricing decisions. Organizational Behavior and Human Decision Processes, 39, 84-97.
Northcraft, G. B., Neale, M. A., & Huber, V. L. (1989). The effects of cognitive biases and social influence on human resource management decisions. In G. Ferris & K. Rowland (Eds.), Research in personnel and human resource management. Greenwich, CT: JAI.
Orpen, C. (1985). Patterned behavior description interviews versus unstructured interviews: A comparative validity study. Journal of Applied Psychology, 70, 774-776.
Peterson, C. R., & DuCharme, W. M. (1967). A primacy effect in subjective probability revision. Journal of Experimental Psychology, 73, 61-65.
Quattrone, G. A. (1982). Overattribution and unit formation: When behavior engulfs the person. Journal of Personality and Social Psychology, 42, 593-607.
Shanteau, J., & Phelps, R. H. (1979). Things just don't add up: The case for subjective additivity of utility (Psychology Rep. No. 79-8, Applied Psychology Series). Manhattan, KS: Kansas State University.
Slovic, P., & Lichtenstein, S. (1983). Preference reversals: A broader perspective. American Economic Review, 73, 596-605.
Smither, J. W., Reilly, R. R., & Burden, R. (1988). Effect of previous performance information on ratings of present performance: Contrast versus assimilation revisited. Journal of Applied Psychology, 73, 487-496.
Sniezek, J. A. (1988). Prediction with single event versus aggregate data. Organizational Behavior and Human Decision Processes, 41, 196-210.
Switzer, F. S., & Sniezek, J. A. (1991). Judgment processes in motivation: Anchoring and adjustment effects on judgment and behavior. Organizational Behavior and Human Decision Processes, 49, 208-229.
Thorndike, R. L. (1949). Personnel selection. New York: Wiley.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.
Tversky, A., & Kahneman, D. (1986). Rational choice and the framing of decisions. Journal of Business, 59, S251-S278.
Ulrich, L., & Trumbo, D. (1965). The selection interview since 1949. Psychological Bulletin, 63, 100-116.
Weekley, J. A., & Gier, J. A. (1987). Reliability and validity of the situational interview for a sales position. Journal of Applied Psychology, 72, 484-487.
Wexley, K. N., Sanders, R. E., & Yukl, G. A. (1973). Training interviewers to eliminate contrast effects in employment interviews. Journal of Applied Psychology, 57, 233-236.
Whyte, G. (1993). Escalating commitment in individual and group decision making: A prospect theory approach. Organizational Behavior and Human Decision Processes, 54, 430-455.
Wiesner, W. H., & Cronshaw, S. F. (1988). A meta-analytic investigation of the impact of interview format and degree of structure on the validity of the employment interview. Journal of Occupational Psychology, 61, 275-290.
Wright, W. F., & Anderson, U. (1989). Effects of situation familiarity and financial incentives on the use of the anchoring and adjustment heuristic for probability assessment. Organizational Behavior and Human Decision Processes, 44, 68-82.
Yadav, M. S. (1994). How buyers evaluate product bundles: A model of anchoring and adjustment. Journal of Consumer Research, 21, 342-353.
Zuckerman, M., Koestner, R., Colella, M. J., & Alton, A. O. (1984). Anchoring in the detection of deception and leakage. Journal of Personality and Social Psychology, 47, 301-311.
