Performance appraisals (PAs) require a lot of attention, and they get it. The problem is they never seem to get enough. For most organizations, PA comes around once a year. And every year questions arise regarding the process – many of them quite familiar to Talent Management: “Should we use a 4-point scale or a 5-point scale?” It’s déjà vu all over again. It can make even the most skilled talent management professional begin to question their decisions.
Here I’ve listed 10 of the most common challenges that arise when rolling out a PA process, along with a response template based on affirmation, negation and a potentially better way. They will be familiar to you. But when viewed outside of your own sandbox they can be amusing in a “rubber-necking” sort of way. Additionally, they can reassure you that you’re not stuck in some talent management “do loop.”
- Should individuals rate themselves?
- Absolutely – including self-ratings helps to calibrate evaluations between individuals and their boss; can provide information the boss doesn’t have; gives employees a voice in the process
- No way – some bosses will agree with the employee for the sake of peace; some bosses will adopt the employee’s rating out of laziness; it’s the boss’s call and they shouldn’t be bothered with predictably lenient self-ratings
- How about we… – have employees rate themselves, but without numbers, to minimize trivial arguments over decimals; be clear about how their ratings will be used; hold bosses accountable for final ratings
- Should others be included in the rating?
- Absolutely – bosses aren’t the only stakeholder, other perspectives matter enough to solicit
- No way – bosses over-depend on or hide behind others’ ratings; creates a burden for some raters who have to rate many employees
- How about we… – communicate that others may be relevant to review; have bosses informally solicit others’ feedback but hold the boss accountable for final ratings
- Should we include objective results?
- Absolutely – objective results avoid problems with rater judgment; personal goals should align with organization goals; the specificity helps to motivate; they help distinguish busy versus productive employees
- No way – objective measures are difficult to attribute exclusively to the employee; objective targets can bring about counterproductive work behavior whereby the end justifies the means; objective goals can be difficult to identify for some employees; bosses can legitimately disagree with objective results and shouldn’t be forced to agree with a given metric
- How about we… – include objective evaluations along with behavioral evaluations for jobs where clear cause and effect exists between employee behavior and valued organization results; do so with full transparency
- Our strategic plan changed, should we change what gets evaluated?
- Absolutely – change happens; it’s more important to be relevant than consistent
- No way – changing targets mid-year is confusing, makes us look unsure and ignores what was once important
- How about we… – design the process to accommodate reasonable changes in performance standards at the organization level but not as a “one-off” for isolated individuals; consider mid-term evaluations for material changes in goals that still leave adequate time for review
- Should we use a common anniversary date?
- Absolutely – it improves calibration to rate everyone at the same time; improves consistency and communication across individuals
- No way – results for everyone don’t happen at the same time or on cue, and it’s important to be timely with reviews; it takes time to get results, and everyone should be given the same amount of time under review – including new hires
- How about we… – use common anniversary dates but supplement them with mid-term evaluations; recognize other talent processes such as succession planning in the performance appraisal cycle
- Should we use PA results for merit reviews?
- Absolutely – performance and compensation need to be clearly and formally linked; a separate process for merit review is inefficient and may be discordant
- No way – puts too much weight on internal equity without consideration of labor markets; some will use ratings to effect pay changes on their own terms instead of accurately appraising performance
- How about we… – use PA results as one source of input to merit reviews but not the sole determinant; ensure feedback providers understand exactly how merit and performance relate to each other
- Should we “fix” rater results that are clearly inaccurate?
- Absolutely – a fair process requires consistency of reviews between raters and justifies corrections
- No way – raters need to have the final say in their evaluations and should be the author of any changes
- How about we… – use rater training and procedural checks throughout the process to minimize outlier evaluations; communicate any changes to bosses and the broader review team
- Should we use an even number of performance gradations?
- Absolutely – it pushes raters “off the fence” of favoring mid-scale evaluations that don’t differentiate between employees
- No way – normal distributions do predict “average” ratings; it upsets raters not to have a midpoint evaluation when they are the ones giving the feedback
- How about we… – use an odd-numbered scale for performance reviews, where feedback is generally expected and bosses need the organization’s confidence and support; use an evenly anchored scale for succession planning to generate differentiation for a process that doesn’t carry the same feedback responsibility
- Should we do succession planning and performance appraisals at the same time?
- Absolutely – performance and potential are necessarily linked, so separate ratings would be redundant; using one process is more efficient
- No way – PA ratings need to be based on past performance, whereas succession planning ratings need to reflect projections; asking raters to do both at the same time is confounding
- How about we… – maintain distinct processes for PA and succession planning but openly reference each with the other when calibrating ratings between raters (this simplifies both tasks while maintaining independence)
- Should we use any and all available data?
- Absolutely – the more input available to final evaluations, the better
- No way – some data comes at the expense of ethical standards and privacy
- How about we… – ensure that whatever data is used for evaluation is well known and generally accepted as reasonable by those being rated; do not pry into private lives outside of work or spy on employees by incorporating anything and everything that could be measured. Overreaching is tempting in our “more is better” world, but attitudes differ significantly between employee and employer about what’s fair for review.
Performance appraisals don’t like surprises.
The most important thing that must be done with performance appraisals is to clarify and communicate exactly what will happen, when, and to whom. The process must be well understood by everyone, and it’s good practice to solicit and include input from organization members. Beyond being fair and professional, you must be perceived as fair and professional for the system to work as expected. And this isn’t a “one-off” or isolated effort. Keep in mind that you may well be repeating the show in the future, so don’t act like you’ll never have to cross the same bridge again. Individuals will remember how they’ve been treated and aren’t typically shy about sharing this with you and others – performance appraisals get a lot of bad press.
As mentioned, no system is perfect, and these “fixes” won’t apply or work in every situation. They are offered as my recommendations based on an abstract, hypothetical model of performance appraisal. Specifics will largely depend on exactly why you have a PA process in the first place.
Performance appraisals are delicate, far-reaching and highly sensitive processes. A “little mistake” can have serious consequences. The referenced concerns have been posed as something of a “mock list.” They are not intended as prescription. Individual results will vary but the basic principles should translate to various situations.
Be safe and true.