
Journal of Educational Measurement, Spring 1989, Vol. 26, No. 1, pp. 17-27

Combining Data on Criticality and Frequency in Developing Test Plans for Licensure and Certification Examinations
Michael T. Kane
American College Testing Program

Carole Kingsbury
National League for Nursing

Dean Colton
American College Testing Program

Carmen Estes
American College Testing Program

Job analysis is a critical component in evaluating the validity of many high-stakes testing programs, particularly those used for licensure or certification. The ratings of criticality and frequency of various activities that are derived from such job analyses can be combined in a number of ways. This paper develops a multiplicative model as a natural and effective way to combine ratings of frequency and criticality in order to obtain estimates of the relative importance of different activities for practice. An example of the model's use is presented. The multiplicative model incorporates adjustments to ensure that the effective weights of frequency and criticality are appropriate.

There are several types of high-stakes, large-scale testing programs that are designed to assess readiness to engage in some type of work. The examinations used in making decisions about professional and occupational licensure and certification are particularly prominent examples of this type of examination. These tests have a direct impact on the work opportunities of the many candidates for licensure or certification, and a less direct, but pervasive, influence on the quality and availability of a variety of important services. The content of these examinations also tends to exert a strong influence on the content of educational programs at both the graduate and undergraduate levels in programs that prepare students for these professions and occupations. For example, one of the arguments for introducing certification examinations for teachers in many states is to upgrade the quality of teacher preparation programs. Some types of employment tests (i.e., those used to evaluate readiness to perform a particular job rather than readiness for training) are also obvious examples of this kind of test.

Licensure and certification tests are intended to provide assurance that passing candidates have the knowledge and skills necessary to perform safely and effectively in some profession or occupation. A major concern in developing licensure and certification examinations and in evaluating their quality is the validity of the proposed interpretation in terms of readiness for the type of work for which a license or certification is being awarded. A fairly detailed specification of the activities performed in the work area (e.g., the results of an empirical job analysis) is generally an integral part of any effort to develop validity data for such tests (Kane, 1982; Shimberg, 1981).

The importance of empirical job analyses in validating these examinations is reflected in several sets of professional and legal standards/guidelines. For example, the Standards for Educational and Psychological Testing (American Psychological Association, American Educational Research Association, & National Council on Measurement in Education, 1985) suggest that in validating professional and occupational licensure and certification examinations, primary reliance usually must be placed on content-related evidence, and that an argument based on content-related evidence should be supported by a job analysis. Under a content-related strategy, the detailed definition of the area of activity can be used directly in developing test specifications for the tests. Alternatively, the definition of the area of activity might be used to develop a criterion of performance in the area as a basis for examining the predictive validity of the test scores. In either case, it is necessary to translate information about the area of activity into specifications for a measurement procedure of some kind. To the extent that job analysis data are used to inform curriculum decisions, an appropriate weighting of curriculum content also will depend on an appropriate weighting of different types of activities in the job analysis.

Although the importance of job analysis in examining validity issues is widely recognized, the methods for developing detailed descriptions of work activity and for translating such descriptions into test specifications are not well developed. In this paper we examine some of the issues involved in developing specifications for licensure and certification tests from information about the frequency and criticality of specific activities, and propose a method for transforming information about frequency and criticality of activities into test specifications. The discussion is in terms of licensure examinations, but the general approach also would apply to certification tests and to some kinds of employment tests.

The next section provides a brief discussion of the advantages and disadvantages of basic additive and multiplicative models for combining frequency and criticality. The third section develops a more sophisticated multiplicative model, which makes it possible to control the relative impact of frequency and criticality on the final weights. Controlling the relative impact of frequency and criticality is an important issue, because questions about the frequency of activities tend to generate much more variability in responses than questions about criticality, and, therefore, in the absence of appropriate adjustments, frequency tends to dominate criticality in determining the weights assigned to activities. The fourth section presents an example based on a job analysis of the practice patterns of entry-level registered nurses, and illustrates the usefulness of controlling the impact of frequency and criticality in the multiplicative model. For motivational reasons, it may be advisable to examine the example in the fourth section before reading the more technical development in the third section.

Models for Combining Data on Frequency and Criticality

One of the most common approaches to job analysis for licensure and certification examinations involves the use of activity inventories, or task inventories (Gael, 1983; McCormick, 1979). Basically, the use of an activity inventory involves three steps. First, activity statements are developed and verified as reflecting potentially important parts of practice. The list of activities should be as comprehensive as possible. Second, a questionnaire based on the list of activities is developed and administered to job incumbents and/or supervisors. In general, the questionnaire asks at least two questions about each activity: how often the activity occurs (its frequency), and how much difference it makes in terms of client outcomes if the activity is performed well or badly (its criticality). Third, data provided by job incumbents are analyzed to weight activities in terms of their overall importance for practice.

The relative importance of any activity in practice will depend on the frequency of the activity (how often it is performed) and the criticality of the activity (the difference that it makes in terms of client outcomes). The results of the job analysis can be summarized in terms of the average frequency of occurrence of each activity over respondents and the average rating of criticality over respondents. The central task is then to combine average frequency and average criticality in order to get an overall index of the importance of the activity.
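The summary just described, averaging each activity's frequency and criticality ratings over respondents, can be sketched in a few lines of Python. The rating matrices below are hypothetical placeholders, not data from any study.

```python
import numpy as np

# Hypothetical respondent-by-activity rating matrices (placeholders, not study data).
# Rows are respondents; columns are activities on the inventory.
freq_ratings = np.array([[2.0, 0.0, 3.0],
                         [1.0, 1.0, 2.0],
                         [2.0, 0.0, 3.0]])   # how often each activity is performed
crit_ratings = np.array([[1.0, 3.0, 2.0],
                         [2.0, 3.0, 1.0],
                         [1.0, 2.0, 2.0]])   # consequence of poor performance

# Summarize the job analysis by averaging over respondents.
F = freq_ratings.mean(axis=0)   # average frequency of each activity
C = crit_ratings.mean(axis=0)   # average criticality of each activity
print(F, C)
```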

We examine several models for combining frequency and criticality in developing a test plan. An additive model of the form

I_i = C_i + F_i,  (1)

where I_i represents the importance of the ith activity, F_i represents the frequency of the ith activity, and C_i represents the criticality of the ith activity, is the simplest type of model to use. However, the additive model yields an index of importance that is hard to interpret in a coherent way. The scales for frequency and criticality are different in their interpretation. Adding the number of times an activity occurs to its perceived consequences results in an index that has no clear interpretation.

The main advantage of the additive model in Equation 1 is that it is simple. A multiplicative model is more statistically complicated (as we shall see) but makes more sense. We can think of the criticality of an activity as a measure of the consequences that may result from the activity, on the average, each time the activity is performed. That is, criticality can be viewed as importance per occurrence of the activity. The overall importance of the activity for practice then could be estimated by summing criticality over all occurrences or, more simply, by multiplying the criticality by the frequency

I_i = C_i F_i.  (2)

The multiplicative model in Equation 2 is a particularly natural way to combine data on criticality and frequency in assigning different levels of importance to different activities.
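Read as a computation, Equation 2 is a single line per activity. The sketch below uses hypothetical average ratings rather than values from the study.

```python
# Hypothetical average ratings (C = criticality, F = frequency), not study data.
activities = {
    "activity A": {"C": 0.55, "F": 2.16},   # low criticality, high frequency
    "activity B": {"C": 0.91, "F": 0.50},   # high criticality, low frequency
}

for name, r in activities.items():
    importance = r["C"] * r["F"]            # Equation 2: I_i = C_i * F_i
    print(f"{name}: I = {importance:.3f}")
```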

Nominal and Effective Activity Weights

One problem that arises in using any model to combine frequency and criticality is that the effective contributions of frequency and criticality to importance may be quite different from the nominal, or apparent, contributions of these two variables (see Jarjoura & Brennan, 1982; Wang & Stanley, 1970). That is, although F_i and C_i play parallel roles in Equation 2, the impact of these two variables on the importance assigned to different activities is determined by the statistical properties of the two variables. In most job analysis studies, the statistical properties of the frequency scale and the criticality scale are likely to be quite different, and therefore the effective contributions of F_i and C_i in Equation 2 in general would not be equal.

The relative emphasis that should be given to frequency and criticality in determining importance is a matter of judgment. However, for licensure that is intended to protect the public from harm or unnecessary risk, criticality would seem to be of at least as much concern as frequency. As Rakel (1979) has suggested,

    The temptation to achieve content validity in examinations by matching test items to the frequency of problems encountered in practice could also be counter-productive. There is a justifiable need to test more heavily on problems that have a high morbidity if missed and fall into the "uncommon but harmful" category. Because of their serious nature, they deserve greater representation in an examination than practice surveys indicate. (p. 93)

Activities that are critical in the sense that their omission or inadequate performance would pose substantial risk to clients are directly related to the purposes of licensure, even if they have relatively low frequency. By contrast, activities that are performed frequently but have very low criticality would be less important for the protection of the public than their frequency might suggest. Although, as noted earlier, the contributions to be made by criticality and frequency are a matter of judgment rather than an empirical question, it seems clear that the relative contributions of these two variables should not be determined by the properties of data collection procedures.

In examining the effective contributions of the two variables in Equation 2, it is convenient to convert Equation 2 into a linear equation by taking the natural logarithms of both sides of the equation:

ln I_i = ln C_i + ln F_i.  (3)

The effective weights of frequency and criticality then can be found by partitioning the variance in ln I_i into two parts:

var(ln I_i) = cov(ln I_i, ln I_i)
            = cov(ln C_i + ln F_i, ln I_i)
            = cov(ln C_i, ln I_i) + cov(ln F_i, ln I_i).  (4)

The first term in Equation 4 can be interpreted as the effective contribution of criticality in Equation 2 and can be expanded as

cov(ln C_i, ln I_i) = cov(ln C_i, ln C_i + ln F_i)
                    = var(ln C_i) + cov(ln C_i, ln F_i).  (5)

Similarly, the effective contribution of frequency in Equation 2 is found by expanding the second term in Equation 4:

cov(ln F_i, ln I_i) = cov(ln F_i, ln C_i + ln F_i)
                    = var(ln F_i) + cov(ln C_i, ln F_i).  (6)
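The partition in Equations 4 through 6 can be checked numerically. The sketch below uses hypothetical average ratings and a population-style (divide-by-n) variance and covariance so that the identity holds exactly.

```python
import numpy as np

# Hypothetical average ratings for a handful of activities (not study data).
C = np.array([0.55, 0.91, 0.73, 0.60, 0.88])   # average criticality
F = np.array([2.16, 0.50, 1.33, 1.90, 0.70])   # average frequency

lnC, lnF = np.log(C), np.log(F)
lnI = lnC + lnF                                 # Equation 3

def cov(x, y):
    """Population-style covariance (divide by n), so the partition is exact."""
    return np.mean((x - x.mean()) * (y - y.mean()))

crit_contribution = cov(lnC, lnC) + cov(lnC, lnF)   # Equation 5
freq_contribution = cov(lnF, lnF) + cov(lnC, lnF)   # Equation 6

# The two contributions partition var(ln I), as in Equation 4.
assert np.isclose(crit_contribution + freq_contribution, cov(lnI, lnI))
print(crit_contribution, freq_contribution)
```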

We can alter the relative contributions of frequency and criticality by transforming one or both of these two variables. Because it is the relative contributions that are significant rather than the absolute values of the variables, it is necessary to transform only one of the two variables. Of the two variables, it seems natural to transform criticality rather than frequency. The frequency scale has a natural interpretation as a count of the number of times the activity is performed, and most transformations of this scale would interfere with this interpretation. The criticality scale is essentially ordinal, and any transformation that did not change the ordering of activities on the criticality scale would not interfere with its interpretability. Given that we are using a multiplicative model, an exponential transformation of the criticality scale of the form

C'_i = C_i^a  (7)

is convenient. Using the transformed criticality, we can determine the effective contributions of criticality and frequency in the new model,

I'_i = C'_i F_i = C_i^a F_i,  (8)

as we did earlier for Equation 2. Taking the logarithm of Equation 8, we have

ln I'_i = ln(C'_i F_i)
        = a ln C_i + ln F_i.  (9)

The variance of ln I'_i then can be expanded as

var(ln I'_i) = cov(ln I'_i, ln I'_i)
             = cov(a ln C_i + ln F_i, ln I'_i)
             = a cov(ln C_i, ln I'_i) + cov(ln F_i, ln I'_i).  (10)

The first term on the right side of Equation 10 represents the contribution of the criticality variable (transformed) to estimates of importance, and is given by

a cov(ln C_i, ln I'_i) = a cov(ln C_i, a ln C_i + ln F_i)
                       = a^2 var(ln C_i) + a cov(ln C_i, ln F_i).  (11)

The second term on the right side of Equation 10 represents the contribution of the frequency variable to the estimates of importance, and is given by

cov(ln F_i, ln I'_i) = cov(ln F_i, a ln C_i + ln F_i)
                     = var(ln F_i) + a cov(ln C_i, ln F_i).  (12)

If we wish the effective weights of criticality and frequency to be equal in determining overall importance, we can set Equation 12 equal to Equation 11 and solve for the appropriate value of a:

a^2 var(ln C_i) + a cov(ln C_i, ln F_i) = var(ln F_i) + a cov(ln C_i, ln F_i),

or

a^2 var(ln C_i) = var(ln F_i),

a = [var(ln F_i) / var(ln C_i)]^(1/2).  (13)

That is, the contributions of criticality and frequency (relative to the total variance of ln I'_i) can be made equal by transforming all criticality values by raising them to the power a, where a is given by Equation 13. Similarly, if we want the effective weight of criticality to be k times that of frequency (where k is any positive value), we can set Equation 11 equal to k times Equation 12 and solve the resulting quadratic equation for a.
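The choice of exponent can be written as a short routine. Setting Equation 11 equal to k times Equation 12 gives a quadratic in a whose positive root is taken; with k = 1 this reduces to Equation 13. The function name and the numerical values in the example calls are illustrative only.

```python
import math

def criticality_exponent(var_lnF, var_lnC, cov_lnC_lnF, k=1.0):
    """Exponent a making the effective weight of criticality (Equation 11)
    equal to k times the effective weight of frequency (Equation 12).

    Setting Equation 11 = k * Equation 12 gives the quadratic
        a^2 var(lnC) + (1 - k) a cov(lnC, lnF) - k var(lnF) = 0,
    whose positive root is returned.  For k = 1 this reduces to
    Equation 13: a = sqrt(var(lnF) / var(lnC)).
    """
    b = (1.0 - k) * cov_lnC_lnF
    disc = b * b + 4.0 * var_lnC * k * var_lnF
    return (-b + math.sqrt(disc)) / (2.0 * var_lnC)

# Illustrative values: equal effective weights (k = 1) give a = 3.0;
# making criticality count twice as much as frequency (k = 2) gives a = 4.5.
print(criticality_exponent(var_lnF=0.9, var_lnC=0.1, cov_lnC_lnF=0.05, k=1.0))
print(criticality_exponent(var_lnF=0.9, var_lnC=0.1, cov_lnC_lnF=0.05, k=2.0))
```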

Weights for the Test Plan

The most obvious way to assign weights in the test plan, based on the importance of each activity, would be to make the weights proportional to the estimated importance

W_i = aI_i.  (14)

Because the weights, interpreted as proportions, must sum to 1, the constant a in Equation 14 (a normalizing constant, not the exponent of Equation 7) can be determined by setting the sum of the weights over all activities equal to 1 and solving for a, which yields

a = 1 / Σ I_i,  (15)

and the proportional weight, W_i, assigned to the ith activity, would be equal to

W_i = I_i / Σ I_i,  (16)

where the value of the I_i's could be found from the basic model in Equation 2 or the more sophisticated model in Equation 8.
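Equations 14 through 16 amount to normalizing the importance values so that they sum to 1. A minimal sketch, with hypothetical importance values that could have come from either Equation 2 or Equation 8, follows.

```python
def proportional_weights(importance):
    """Equation 16: weights proportional to importance and summing to 1."""
    total = sum(importance)               # Equation 15: a = 1 / (sum of I_i)
    return [I / total for I in importance]

# Hypothetical importance values (from Equation 2 or Equation 8).
I = [0.8, 1.5, 0.3, 2.4]
W = proportional_weights(I)
print([round(w, 2) for w in W])           # [0.16, 0.3, 0.06, 0.48], summing to 1
```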

A word of caution is appropriate at this point. The data generated by empirical job analyses, to which these analyses would be applied, are based on what practitioners say they are doing. In many cases, it can be argued that the distribution of effort in current practice is inappropriate, or that future needs will be different from current needs. Job analysis data describe what is, and not necessarily what should be. Such data can provide guidance in developing test plans and in designing educational programs, but should not be used mechanically.


An Example

In this section, the implications of adjusting criticality ratings to make their impact equivalent to that of the frequency ratings will be discussed in terms of a job analysis of the entry-level practice of registered nurses. The data reported in this section are derived from a job analysis study of registered nurses (Kane, Kingsbury, Colton, & Estes, 1986). As part of this study, responses from 1,375 newly licensed registered nurses who passed the NCLEX-RN (the licensing examination for registered nurses, prepared by the National Council of State Boards of Nursing) in July 1984 were analyzed. In addition to questions about work setting, educational background, and related topics, the participants were asked to rate 222 activities in terms of their frequency of performance and the criticality of the activity for client well-being (see Kane et al., 1986, for details on questionnaire development and sampling design).

The range of values for the average frequencies of activities was relatively large: Some activities had very low frequencies (e.g., administering CPR), whereas other activities had high frequencies (e.g., taking vital signs). Therefore, the variability among activities in their average frequencies tended to be quite large. By contrast, the range of criticality ratings tended to be narrow. For the sample of newly licensed registered nurses, the variance of ln C_i was .084, the variance of ln F_i was .877, and their covariance was .057. Therefore, from Equations 5 and 6, the effective contribution of criticality was .141, and the effective contribution of frequency was .934. As expected, the effective contribution of frequency to the index of importance was much larger than the effective contribution of criticality. Using Equation 13, it was found that the constant a should be equal to 3.235 in order for the impact of frequency and criticality to be equal in determining importance.

Assuming I_i = C_i F_i, Equation 16 can be used to estimate the unadjusted proportional weight, W_i, assigned to any activity. Similarly, substituting I'_i = C_i^a F_i for I_i, Equation 16 can be used to determine the adjusted proportional weight, W'_i, assigned to any activity. The use of I'_i with a = 3.235 instead of I_i increases the weighting of activities with relatively high criticality ratings and decreases the weights of activities with relatively low criticality ratings.
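The reported summary statistics can be pushed through Equations 5, 6, and 13 directly, as a check on the numbers quoted above; the small discrepancy in the exponent presumably reflects rounding of the reported variances.

```python
import math

# Summary statistics reported for the RN job analysis (logged ratings).
var_lnC, var_lnF, cov_lnCF = 0.084, 0.877, 0.057

crit_contribution = var_lnC + cov_lnCF   # Equation 5: 0.141
freq_contribution = var_lnF + cov_lnCF   # Equation 6: 0.934
a = math.sqrt(var_lnF / var_lnC)         # Equation 13: about 3.23 (the paper's
                                         # 3.235 evidently uses unrounded variances)
print(round(crit_contribution, 3), round(freq_contribution, 3), round(a, 3))
```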

Before examining data for task statements from the study, it may be useful to consider the effect of adjusting weights on four hypothetical tasks. In the study of entry-level registered nursing practice, the mean frequency for the 222 activities was 1.33, and the standard deviation was .83; therefore, an activity with an average frequency of .50 would be one standard deviation below the mean, and an activity with an average frequency of 2.16 would be one standard deviation above the mean. For criticality, the average was .73, and the standard deviation was .18; therefore, an activity with an average criticality of .55 would be one standard deviation below the mean criticality, and an activity with an average criticality of .91 would be one standard deviation above the mean.

Table 1 contains the weights that would be assigned to four hypothetical activities if no adjustment were made in the criticality ratings (i.e., weights based on I = CF), and if the criticality ratings were adjusted to weight criticality and frequency equally (i.e., weights based on I' = C^a F). Two of the four hypothetical activities have relatively low frequencies (one standard deviation below the mean, or F = .50), and two have relatively high frequencies (one standard deviation above the mean, or F = 2.16). For each value of frequency, one activity has low criticality (C = .55), and one has high criticality (C = .91). In computing the weights in Table 1, the values for I for the four activities were divided by the sum of the four values of I in order to obtain proportional weights. Similarly, the values of I' were divided by the sum of the four values of I'.

Table 1
Weights Assigned to Four Hypothetical Activities Using I = CF and I' = C^a F With a = 3.235

                          I = CF                                  I' = C^a F
Average F    Average C = 0.55   Average C = 0.91     Average C = 0.55   Average C = 0.91
  0.50             0.071              0.117                0.031              0.157
  2.16             0.306              0.506                0.133              0.678
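The entries in Table 1 can be reproduced from the four (F, C) combinations and the exponent a = 3.235 reported above. This is a sketch of that computation, normalizing within each model over the four hypothetical activities as described in the text.

```python
a = 3.235
activities = [(0.50, 0.55), (0.50, 0.91), (2.16, 0.55), (2.16, 0.91)]   # (F, C)

def weights(importances):
    total = sum(importances)
    return [i / total for i in importances]

unadjusted = weights([C * F for F, C in activities])            # I  = C F
adjusted   = weights([(C ** a) * F for F, C in activities])     # I' = C^a F

for (F, C), w, w_adj in zip(activities, unadjusted, adjusted):
    print(f"F={F:.2f}  C={C:.2f}  W={w:.3f}  W'={w_adj:.3f}")
# Reproduces Table 1: e.g., the high frequency-low criticality activity gets
# W = 0.306 but W' = 0.133, while the low frequency-high criticality activity
# gets W = 0.117 but W' = 0.157.
```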

There are two general features of the two sets of weights in Table 1 that should be noted. First, the values of the adjusted weights, W'_i, for the two low criticality activities are smaller than the corresponding values of the unadjusted weights, W_i, and the values of W'_i for the two high criticality activities are larger than the corresponding values of W_i. The adjusted weights, W'_i, give more emphasis to criticality than the unadjusted weights, W_i. Second, for the unadjusted weights, the weight (W_i = .306) assigned to the high frequency-low criticality activity is almost three times as large as the weight (W_i = .117) for the low frequency-high criticality activity. Because the impact of frequency on W_i is much larger than the impact of criticality, the weighting of the high frequency activities tends to be much larger than the weighting of low frequency activities, regardless of the criticality ratings. When the criticality scale is adjusted so that the impact of frequency and criticality on importance is equal, as in the right side of Table 1, the weight, W'_i, of the high frequency-low criticality activity is roughly equal to the weight of the low frequency-high criticality activity.

Table 2 presents percentage weights for five pairs of activities (from the study of entry-level RN practice) representing different values of frequency and criticality. (Note that the weights in Table 2 are percentage weights based on the 222 activities in the study. The weights and the adjusted weights for the 222 activities sum to approximately 100.) In each pair of activities, one activity has a relatively low frequency, and one has a relatively high frequency. The criticality ratings in each pair are approximately equal, but the pairs of criticality ratings increase from top to bottom in Table 2.

Table 2
Percentage Weights Assigned to Ten Actual Activities Using I = CF and I' = C^a F With a = 3.235

                                                                    Ratings          Weights
Activity                                                            C      F         W       W'
219. Help clients choose recreational activities that fit
     their age and condition                                        0.39   0.63      0.112   0.022
 12. Weigh a client                                                 0.37   1.94      0.324   0.057
191. Teach a client with poor inter-personal skills to
     communicate more effectively                                   0.57   0.55      0.141   0.064
 59. Record observations of a client's behavior that
     indicate mood                                                  0.54   2.18      0.532   0.219
 28. Administer an immunizing agent                                 0.76   0.59      0.205   0.180
211. Suggest revising or discontinuing a medication order           0.76   1.91      0.653   0.573
 14. Evaluate the impact of therapeutic interventions on a
     client's potential for suicide                                 0.87   0.62      0.242   0.286
184. Administer oxygen                                              0.88   2.24      0.885   1.068
 97. Assess the environment of a suicidal client for
     potential hazards                                              0.95   0.66      0.283   0.412
 96. Maintain asepsis for clients at risk                           0.96   2.25      0.969   1.422


The weights in Table 2 follow the pattern discussed earlier. For both sets of weights, the largest weights are for activities with high frequency and high criticality, and the smallest weights are for activities with low frequency and low criticality. For a given level of criticality, the activity with a higher frequency has a larger weight than the activity with a lower frequency. Likewise, for a given frequency level, the weights increase as a function of criticality.

As was the case for the artificial data in Table 1, the major difference between the unadjusted weights, W_i, and adjusted weights, W'_i, can be seen most clearly by comparing high frequency-low criticality activities to low frequency-high criticality activities. The unadjusted weight, W_i, assigned to activity 12, "Weigh a client," which has a low criticality but a high frequency, is higher than W_i for activity 97, "Assess the environment of a suicidal client for potential hazards," which has a high criticality but a low frequency. This occurs because the unadjusted weights, W_i, depend mainly on frequency: Note that the activity, "Weigh a client," has a larger value of W_i than any of the five activities with low frequencies, regardless of their criticality. By contrast, the adjusted weight, W'_i, for activity 12, "Weigh a client," is much smaller than its unadjusted weight, W_i, because of its very low criticality rating. Similarly, the value of W'_i for activity 97, "Assess the environment of a suicidal client for potential hazards," is larger than the corresponding value of W_i because of its high criticality rating. Because of these changes, the adjusted weight for activity 97, which has low frequency and high criticality, is much larger than the adjusted weight for activity 12, which has low criticality and high frequency.

Given the purpose of licensure, to protect the public, it is desirable that a licensure examination emphasize activities that would pose a serious threat to clients if they were omitted or done improperly. Therefore, it would seem appropriate to give the criticality ratings at least as much emphasis as the frequency ratings in evaluating overall importance, and this suggests that adjusted weights, rather than unadjusted weights, should be used.

Conclusions

In combining data on criticality and frequency as a basis for developing a licensure or certification examination, a multiplicative model would seem to be particularly appropriate. Criticality ratings provide estimates of the importance of the activity per occurrence, and frequency data provide estimates of the rate of occurrence of the activity in practice. By multiplying criticality by frequency, the overall importance of the activity in practice can be estimated.

In addition, the difference between the nominal weights of these two variables and their effective weights in determining estimates of importance needs to be considered. This is particularly true because the variability in average frequencies is likely to be much larger than the variability in criticality ratings in task analyses of professional practice. In this paper, we have outlined procedures for analyzing the effective weights of criticality and frequency in a multiplicative model and for controlling the relative impact of frequency and criticality in estimating overall importance.

References
American Psychological Association, American Educational Research Association, & National Council on Measurement in Education. (1985). Standards for educational and psychological testing. Washington, DC: American Psychological Association.

Gael, S. (1983). Job analysis: A guide to assessing work activities. San Francisco: Jossey-Bass.

Jarjoura, D., & Brennan, R. (1982). A variance components model for measurement procedures associated with a table of specifications. Applied Psychological Measurement, 6, 161-171.

Kane, M. (1982). The validity of licensure examinations. American Psychologist, 37, 911-918.

Kane, M., Kingsbury, C., Colton, D., & Estes, C. (1986). A study of nursing practice and role delineation and job analysis of entry-level performance of registered nurses. Chicago: National Council of State Boards of Nursing.

McCormick, E. (1979). Job analysis: Methods and applications. New York: American Management Association.

Rakel, R. (1979). Defining competence in specialty practice: The need for relevance. In Definitions of competence in specialities of medicine, conference proceedings. Chicago: American Board of Medical Specialties.

Shimberg, B. (1981). Testing for licensure and certification. American Psychologist, 36, 1138-1146.

Wang, M., & Stanley, J. (1970). Differential weighting: A review of methods and empirical studies. Review of Educational Research, 40, 663-705.

Authors

MICHAEL T. KANE, Senior Research Scientist, American College Testing Program, P.O. Box 168, Iowa City, IA 52243. Degrees: BS, Manhattan College; MA, State University of New York at Stony Brook; MS, PhD, Stanford University. Specialization: measurement theory.

CAROLE KINGSBURY, Director of Test Construction, National League for Nursing, 10 Columbus Circle, New York, NY 10019. Degrees: BS, EdM, EdD, Columbia University. Specialization: evaluation in nursing education and practice.

DEAN COLTON, Research Specialist, American College Testing Program, P.O. Box 168, Iowa City, IA 52243. Degrees: BS, MA, University of Iowa. Specialization: educational measurement.

CARMEN A. ESTES, Director, Program Support and Research, Contract Services Area, Test Development Division, American College Testing Program, P.O. Box 168, Iowa City, IA 52243. Degrees: BSN, University of Maryland; MS, Pennsylvania State University; MEd, Towson State College; PhD, Pennsylvania State University. Specializations: test development, test specifications, job analysis, role delineation studies.


