
Definition performance appraisal

In this file, you can refer to useful information about the definition of performance appraisal, such as
performance appraisal methods, tips, forms, and phrases. If you need more
assistance with the definition of performance appraisal, please leave a comment at the end of the file.
Other useful material for you:
performanceappraisal123.com/1125-free-performance-review-phrases
performanceappraisal123.com/free-28-performance-appraisal-forms
performanceappraisal123.com/free-ebook-11-methods-for-performance-appraisal

I. Contents of the definition of performance appraisal


==================
In my most recent article, I addressed the subject of critical elements. These are the rating
categories used in performance evaluation systems. I pointed out how generic elements and
elements that presume to rate competencies inevitably lead to subjective standards and ratings.
Such evaluations do not serve to improve performance and, therefore, are of no particular benefit
to the agencies using them. Moreover, rating these elements often leads to allegations of bias
and favoritism.
In this article I want to discuss performance standards: the measures used to evaluate each
critical element. In doing so, readers can discover what I believe is a fundamental law of
performance evaluations and the most common reason appraisals have failed us over decades.
Law, regulations, and generics
Standards are often referred to as measures or yardsticks. They inform employees of the
expectations to be met in a given element. Our Office of Personnel Management (OPM)
oversees the government's performance appraisal system. In the Code of Federal Regulations (5
CFR 430.203), they define performance standards as follows:
Performance standard means the management-approved expression of the performance
threshold(s), requirement(s), or expectation(s) that must be met to be appraised at a particular
level of performance. A performance standard may include, but is not limited to, quality,
quantity, timeliness, and manner of performance.

The idea was to have individually written performance standards (or expectations) for each
critical element. This challenges managers to come up with specific measures for specific jobs
under their leadership. In fact, the law itself (5 USC 4302) reads, in part:
Under regulations which the Office of Personnel Management shall prescribe, each
performance appraisal system shall provide for establishing performance standards which will, to
the maximum extent feasible, permit the accurate evaluation of job performance on the basis of
objective criteria (which may include the extent of courtesy demonstrated to the public) related
to the job in question for each employee or position under the system
This isn't happening in most Federal agencies. Many HR departments have devised generic or
benchmark standards for rating purposes. Their intent is to relieve supervisors of the chore of
defining successful performance. They also believe that providing everyone with the same rating
elements and standards gives an appearance of fairness and equity despite the legal mandate
for objectivity.
Benchmarks and guesswork
Pre-written performance standards are intentionally vague so that they can be used by any
supervisor for any critical element and any government job, from Fork Lift Operators to
Research Scientists. Such canned standards inevitably lead to subjective ratings, which might
have been the same had there been nothing at all in writing.
Most of the supervisors and managers I meet (from State, Navy, Agriculture, Interior, Army, etc.)
are trying to rate their workforce fairly and accurately. Despite good intentions, however, their
ratings are based only on anecdotal evidence, and little of that in most cases. The generic
benchmarks encourage subjective evaluations. Some agencies attempt to describe high, middle,
and low levels of achievement; however, the language they use must be interpreted and applied
by supervisors who are often baffled by it.
Standards as they were meant to be
In seminars, I ask managers, who are required to develop their own standards or supplement the
generics with their own criteria, to consider the first three techniques offered up by OPM in the
Code of Federal Regulations: quality, quantity, and timeliness. These are the traditional
outcome-based measures of performance that senior managers, consultants, and HR specialists
continue to advocate.
I ask seminar participants (and now you) to think of all the ways a real supervisor would actually
know if an employee is delivering quality, quantity and/or timeliness. Examples of performance

measurement techniques they suggest are: supervisory inspection/observation; customer
feedback; coworker feedback; review of logs; and returns/rework. Of course there are more.
Take another moment to consider ways a supervisor could know if you or another employee is
delivering results: quality, quantity, and/or timeliness. This is what management is supposed to
do to have objective appraisals.
Note to HR specialists: Something becomes noticeable during the course of this exercise,
perhaps for the first time. Quantity of work performed may prove utterly unreliable as a
measure. As managers try to come up with real-world measures for quantity, they commonly
give up.
Quantity usually depends on external factors such as what other jobs are being assigned to the
employee, or how many requests for work come through the door or across an employee's desk.
Moreover, different assignments have different levels of complexity. This makes quantity
measures particularly unreliable.
There is also the obvious connection between the quantity of work performed and the time it
takes to perform it, and the more quantity and timeliness are stressed, the worse quality is
likely to be. As the old saying goes, "The faster I work, the behinder I get." Measuring
outcomes has many implications that need to be considered by those who develop performance
standards.
And now, the Commandment
Beyond the obvious problems with performance measurement, there is this one: every technique
or idea you might imagine for assessing quality, quantity, and/or timeliness leads to a significant
workload for the front-line supervisor. When evaluating results using any of these measures,
s/he must regularly observe and log individual performance throughout the year. What's more,
records must be kept on each individual supervised in each critical element.
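To make that record-keeping burden concrete, here is a minimal sketch, purely illustrative and not part of the original article, of what a supervisor's observation log might look like if every employee had to be tracked in every critical element. The element names, employees, and notes are invented for the example.

```python
from collections import defaultdict
from datetime import date

# Hypothetical critical elements; real ones vary by agency and position.
CRITICAL_ELEMENTS = ["Quality of work", "Timeliness", "Customer service"]

# One list of entries per employee: (date, element, note).
observation_log = defaultdict(list)

def log_observation(employee, element, note, when=None):
    """Record a single supervisory observation for one employee and one element."""
    observation_log[employee].append((when or date.today(), element, note))

# The workload in practice: every employee, every element, all year long.
log_observation("Employee A", "Timeliness", "Report delivered two days early")
log_observation("Employee A", "Quality of work", "Two errors found on rework review")

for employee, entries in observation_log.items():
    print(employee, "-", len(entries), "logged observations")
```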
This understanding leads to the First Commandment of Performance Appraisal: If thou
attemptest to rate employees in terms of Quality, Quantity, and/or Timeliness, thou shalt use
metrics that require thee to observe, log, and keep book for every employee in every critical
element.
Supervisors who lack elaborate reporting software (such as that used in call centers) don't like
reading this. The Commandment, however, doesn't end there. There's a corollary which
explains why agencies use benchmark and generic standards: If thou cannot or will not keep a
bean count on individual employee performance in several critical elements, thou shalt rate thy
employees subjectively.

Where metrics aren't readily available, a "beauty contest" is the inevitable result. Some
supervisors keep evaluation notes throughout the year. Most don't, nor do they have any
memory of when the last rating year ended and this one began. Ratings become guesswork.
Get real! Get in GEAR!
Managers tell me that there aren't enough hours left in a day to observe, log, and keep individual
records on each employee. They say that time for evaluating employee performance comes after
the meetings, special projects, and reports that government demands of them. Moreover, they
find bean counting (of deadlines met/missed, errors, etc.) demeaning and/or distasteful. Most
managers I meet question whether their agencies would actually benefit from such an investment
of time and energy.
The National Council on Federal Labor-Management Relations is grappling with these issues
with its GEAR model now being piloted in several Federal agencies. GEAR (Goals,
Engagement, Accountability, and Results) is initially focusing on the infrastructure needed to rate
workers in results. This includes: regular supervisory feedback; holding supervisors accountable
for making appraisals a priority; serious training in performance management; and reconsidering
the selection process for new supervisors/managers. I commend them for putting the horse in
front of the cart, and wish them luck with such an ambitious undertaking.
It's a Commandment; fess up to it!
By my reckoning, if evaluations are to prove useful, the Commandment needs to be
acknowledged by OPM, senior management, the National Council, and Chief Human Capital
Officers who are responsible for making the law and regulations work. For decades, front-line
supervisors have been on the receiving end of rhetoric regarding results-oriented and objective
measures without sensing a commitment from those at the top to do so themselves.
Those who advocate for objective and results-oriented standards need to explain how and why
supervisors and managers should adhere to the Commandment. Those who have tried to
quantify the work of Economists, Electricians, Biologists, and Law Enforcement Officers have
been frustrated for years. In some cases it seems as if senior management and HR are more
focused on finding something to measure than on actual job performance.
Here's the Commandment again: If thou attemptest to rate employees in terms of Quality,
Quantity, and/or Timeliness, thou shalt use metrics that require thee to observe, log, and keep
book for every employee in every critical element. Now consider a sign that hung in Albert
Einstein's office in Princeton: "Not everything that counts can be counted, and not everything
that can be counted counts." I think there's wisdom there.

Subjectivity reigns
In light of the Commandment, it might be better to simply acknowledge that employee
performance ratings will commonly be subjective. Where metrics are available and work to
motivate, go ahead and use them. Insisting on specific, objective, measurable, outcome-driven
standards where such data is unlikely to be harvested puts too many supervisors in the awkward
position of fudging.
The bond of trust between a supervisor and subordinate at the workplace (where submarines are
repaired, veterans treated, forest fires suppressed, contracts examined, roads designed, and
nuclear materials safeguarded) is paramount. It is jeopardized when management fails to
practice what is preached. Insisting that metrics reign and performance ratings are science rather
than art is one area where rhetoric and reality don't match up.
Gathering data, analyzing results, and aiming for continuous improvement is a worthy endeavor.
As a management practice it can help all of us see where we are and where we want to go.
Metrics relating to quality, quantity, and timeliness are at the heart of most management
philosophies like MBO, SQC, FTF, and TQM. But the First Commandment of Performance
Appraisal hasn't been followed by managers in most agencies. That's why we see so many
canned standards.
Offering up benchmarks and generics, while insisting on results-driven performance
standards, isn't fooling anyone. It will take honest adults to recognize and acknowledge the
contradiction. Perhaps the National Council on Federal Labor-Management Relations can help in
this regard. A little candor might go a long way.
==================

II. Performance appraisal methods

1. Essay Method

In this method the rater writes a detailed description of the employee within a number of broad categories, such as: overall impression of performance, promotability of the employee, existing capabilities and qualifications for performing jobs, strengths and weaknesses, and training needs of the employee.
Advantage: It is extremely useful in filling information gaps about employees that often occur in better-structured checklists.
Disadvantages: It is highly dependent upon the writing skills of the rater, and most raters are not good writers. They may get confused; success depends on the memory power of raters.

2. Behaviorally Anchored Rating Scales


In this method, statements of effective and ineffective behaviors determine the points on the scale; they are said to be behaviorally anchored. The rater is asked to indicate which behavior best describes the employee's performance.
Advantages: Helps overcome rating errors.
Disadvantages: Suffers from distortions inherent in most rating techniques.
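As a rough illustration of how behaviorally anchored points work, the sketch below is my own example, not drawn from the article; the anchor statements and point values are invented. It maps behavior statements to scale points and looks up the point for the statement the rater selects.

```python
# Hypothetical behaviorally anchored scale for a single element.
# Each scale point is "anchored" by a statement of effective or ineffective behavior.
bars_anchors = {
    5: "Consistently anticipates customer needs and resolves issues without escalation",
    4: "Resolves most customer issues promptly and courteously",
    3: "Handles routine customer requests adequately",
    2: "Frequently needs help to resolve routine customer requests",
    1: "Regularly ignores or mishandles customer requests",
}

def bars_rating(selected_statement: str) -> int:
    """Return the scale point whose anchor the rater says best describes the employee."""
    for point, statement in bars_anchors.items():
        if statement == selected_statement:
            return point
    raise ValueError("Statement is not one of the defined anchors")

print(bars_rating("Handles routine customer requests adequately"))  # -> 3
```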

3. Rating Scale
Rating scales consist of several numerical scales representing job-related performance criteria such as dependability, initiative, output, attendance, attitude, etc. Each scale ranges from excellent to poor. The total numerical scores are computed and final conclusions are derived.
Advantages: Adaptability, ease of use, low cost, every type of job can be evaluated, a large number of employees can be covered, no formal training required.
Disadvantages: Rater's biases.
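To show how the total numerical scores might be computed, here is a small sketch under my own assumptions; the criteria and the 1-to-5 scale are illustrative, not prescribed by any particular appraisal form.

```python
# Hypothetical job-related criteria, each rated on a 1 (poor) to 5 (excellent) scale.
ratings = {
    "dependability": 4,
    "initiative": 3,
    "output": 5,
    "attendance": 4,
    "attitude": 3,
}

# Total and average scores from which final conclusions would be derived.
total = sum(ratings.values())
average = total / len(ratings)
print(f"Total score: {total}, average: {average:.1f} out of 5")
```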

4. Checklist method

Under this method, a checklist of statements about the traits of the employee, in the form of Yes-or-No questions, is prepared. Here the rater only does the reporting or checking, and the HR department does the actual evaluation.
Advantages: Economy, ease of administration, limited training required, standardization.
Disadvantages: Rater's biases, use of improper weights by HR, does not allow the rater to give relative ratings.
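Here is a minimal sketch of how the HR department's side of the checklist method might work, assuming each Yes-or-No statement carries a weight; the statements and weights are invented for illustration.

```python
# Rater's checked statements (True = Yes, False = No); weights are assigned by HR.
checklist = [
    ("Completes assignments on schedule", True, 3.0),
    ("Requires repeated instructions",    False, -2.0),
    ("Cooperates with coworkers",         True, 2.0),
    ("Work requires frequent rework",     False, -3.0),
]

# HR scores the checklist: only statements marked "Yes" contribute their weight.
score = sum(weight for _, checked, weight in checklist if checked)
print("Checklist score:", score)  # -> 5.0 with the values above
```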

5. Ranking Method
The ranking system requires the rater to rank his subordinates on overall performance. This consists of simply putting employees in rank order. Under this method, the ranking of an employee in a work group is done against that of another employee. The relative position of each employee is expressed in terms of his numerical rank. It may also be done by ranking a person on his job performance against another member of the competitive group.
Advantages of Ranking Method
Employees are ranked according to their
performance levels.
It is easier to rank the best and the worst
employee.
Limitations of Ranking Method
The whole man is compared with another
whole man in this method. In practice, it is very difficult
to compare individuals possessing various individual
traits.
This method speaks only of the position where an employee stands in his group. It does not indicate how much better or how much worse an employee is when compared to another employee.
When a large number of employees are working, ranking of individuals becomes a difficult issue.
There is no systematic procedure for ranking
individuals in the organization. The ranking system does
not eliminate the possibility of snap judgements.
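For completeness, a short sketch of simple rank-ordering: employees are sorted by an overall performance score and assigned ranks. The names and scores are made up, and, as the limitations above note, the resulting ranks say nothing about how far apart the employees really are.

```python
# Hypothetical overall performance scores assigned by the rater.
overall_scores = {"Employee A": 78, "Employee B": 91, "Employee C": 85}

# Rank order: the highest score gets rank 1.
ranked = sorted(overall_scores.items(), key=lambda item: item[1], reverse=True)
for rank, (employee, score) in enumerate(ranked, start=1):
    print(f"{rank}. {employee} (score {score})")
```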

6. Critical Incidents Method


The approach is focused on certain critical behaviors of the employee that make all the difference in performance. Supervisors record such incidents as and when they occur.
Advantages: Evaluations are based on actual job behaviors, ratings are supported by descriptions, feedback is easy, recency biases are reduced, chances of subordinate improvement are high.
Disadvantages: Negative incidents may be given priority, incidents may be forgotten, overly close supervision; feedback may be too much and may appear to be punishment.

III. Other topics related to Definition performance appraisal


(pdf, doc file download)
Top 28 performance appraisal forms
performance appraisal comments
11 performance appraisal methods
25 performance appraisal examples
performance appraisal phrases
performance appraisal process
performance appraisal template
performance appraisal system
performance appraisal answers
performance appraisal questions

performance appraisal techniques


performance appraisal format
performance appraisal templates
performance appraisal questionnaire
performance appraisal software
performance appraisal tools
performance appraisal interview
performance appraisal phrases examples
performance appraisal objectives
performance appraisal policy
performance appraisal letter
performance appraisal types
performance appraisal quotes
performance appraisal articles
