When 5,000 resumes don’t get you what they used to: How to avoid “motivated hiring” in a demotivated labor market

How to avoid motivated hiring when workers are quitting in record numbers.

As businesses reopen nearly as fast as they shut down due to the pandemic, demand for workers has skyrocketed, creating the risk of what I term "motivated hiring." A reasonable assumption would be that most workers displaced by the pandemic would return to their previous employer and job. That’s not what’s happening. In fact, the reverse is.

Workers are quitting.

According to the Bureau of Labor Statistics, four and a half million people quit their jobs in November 2021 in what has been called "The Great Resignation." This stubborn labor market isn’t just unmotivated; it’s actively demotivated. Estimates are that one in seven individuals comprising current unemployment numbers is not seeking (traditional) employment at all. Those 4.5M signing off in November? They didn’t just quit jobs, they quit work.

That’s A problem.

When people are voluntarily out of the workforce, traditional models of supply and demand falter. The things that used to work to find and employ talent don’t work as well. Five thousand resumes don’t get you what they used to (apologies for the sarcasm). So, what do organizations do when standard practices aren’t working?

They “double-down.”

Bigger signing bonuses, higher salaries, free beer (I am not making this up): organizations are doing “whatever it takes” to hire the talent they need to keep the business going. But in times of “whatever it takes,” sometimes “whatever” will do.

That’s THE problem.

And don’t think this is someone else’s problem. Not all 4.5M who quit left jobs in hospitality and healthcare, although the effects of the pandemic have been especially devastating to those sectors. Many more are simply NOT going back to the job they quit, or that quit them.

This isn't called “The Great Resignation” because of a couple of front-line industry sectors. The quits span industries, and all levels.

The Brutal Truth.

When people flat out quit without another job or plan, they quit to escape bad. Fixing bad is much harder for organizations than maintaining, or even adding, good.

When fixing wages, benefits, hours, etc. doesn’t work, hiring managers and organizations “fix” themselves. Without knowing it.

That's THE REALLY BIG problem.

Cognitive dissonance drives motivated reasoning: justifying irrational thinking or behavior by adding more illogical thinking or behavior. Here’s an example of what that sounds like: “Sure, plastic is bad for the environment, but burning gas is worse and I drive a Prius -- besides, I planted a tree the other day.” Neither your Prius nor that tree you planted has anything to do with the plastic – they only make it SEEM less bad. And you drive away in your toy.

“Motivated hiring” is the term I use for the organization- or individual-level hiring equivalent of cognitive dissonance. It’s not the same as panic hiring or working harder to hire. It’s the opposite.

Adding resources to your recruiting and hiring efforts is logical, if not necessarily productive in today’s labor market. It’s when you can’t, or don’t want to, fix things anymore that motivated hiring enters the picture. Without knowing it, individuals and organizations start to justify loose hiring practices. They start making excuses for hiring candidates who aren’t “fully” qualified. Hiring what you want to believe instead of what is, and being okay with that -- that’s motivated hiring.

At the individual level it may sound like, “… I can coach them.” At the organization level, it might be changing job titles to feel like hiring for something different, and differently.

It’s mind games, fooling ourselves, and it happens to the best of us when we’re not at our best.

It's a business decision.

Without flight attendants, planes don’t fly. Without teachers, schools don’t teach. Sure, that's business.

Interim hires, substitutes, contract hires – right?

These are the decisions that keep business going when there’s no better option. And it feels okay. Actually, it may even be a good “business decision.” Sometimes “business decisions” are a convenient way of passing blame: “I know better, but you know, it's a 'business decision'.”

Read More

Warning signs about resilience and resilience-based HR practices

Summary:

Resilience.

If your next book’s title doesn’t include this word, I recommend adding it. (Try, "The Resilience of Cooking"). If your hiring practices don’t include resilience, you're exceptional (and not in a good way). Resilience is THE thing of HR today.

Although I start with tongue-in-cheek language, I'm not at all flippant on this topic. Resilience is serious, and I mean no disrespect. It's merely a matter of style. So, let me be clear:

This is NOT a repudiation of resilience.

Resilience is real. Great thinking has brought attention, understanding and sage counsel to the concept. I wholly support the construct for its value in advancing organization theory and practice. Not through any fault of its origin or development, but for a number of reasons, I see warning signs about resilience risk and urge caution in the use of resilience-based HR practices.

Specifically in the case of resilience, the risks of misunderstanding and misuse are greater than for previous super constructs ("Emotional Intelligence" comes to mind). The mere term, resilience, seems so relevant today that many have been, and others will be, drawn to its "solutions" like choosing a book by its cover. A book that has your name on it. Who doesn't want a resilient organization?

But these are the framing conditions that can rapidly lead to overdependence on, and overconfidence in, an apparently simple term that is more nuanced and potentially hazardous than it appears.

Ultimately, I urge you to consider what I see as early warning signs about resilience and its application in HR systems. You may disagree, and I may be wrong. But both the stakes and risks are high. And I'm comfortable risking my reputation to raise awareness and stimulate deeper thought on this topic.

Read More

9 signs you might be using the wrong personality test

Personality testing is a big part of the way organizations make hiring decisions — it has been for some time now (it wasn’t popular before about 1980). With advances in technology there has been a great proliferation of personality assessments. They’re not all good. A personality test is much easier to generate than to validate. This quiz, below, can help you know whether you’re using the wrong personality test. (Have some fun with it.)

Directions: The following list of paired statements (questions) reflects things I occasionally hear when folks are evaluating personality tests. For each pair, one response is more problematic than the other when it comes to evaluating personality tests. Reflecting on your current situation, which of the two statements would I be most likely to hear from you or others if I were a fly on the wall while you were getting the pitch from your vendor?

Quiz to raise the question "Am I using the right personality test?"

Response Key: For all odd-numbered pairs the problematic statement is in column A; for even-numbered items the more problematic one is in column B.

Some of the statements do require more assumption than others; don’t get too caught up in the scoring. These are my answers and rationale:

  1. “It sure worked for me” — Frequently a personality test is sold by having the decision maker complete the assessment. This isn’t a bad thing — I encourage users to complete a personality test for themselves. The potential problem is that this is frequently the primary (or sole) evaluation criterion for a decision maker. Vendors know this and some hawk an instrument that produces unrealistically favorable results. “It says I’m good, therefore it must be right.” As for column B, the 300-page manual: good ones are typically lengthy. It takes some pulp to present all the evidence supporting a properly constructed inventory.
  2. “A type’s a type” – The most popular personality assessment of all, the MBTI, presents results for an individual as one of 16 types. Scores, to the extent that they are reported, only reflect the likelihood that the respondent is a given type or style – not that they are more or less extraverted, for example. But research and common sense say that personality traits do vary in degree; someone can be “really neurotic.” Two individuals with the same type can be quite different behaviorally based on how much of a trait they possess. A very extraverted person is different from someone who is only slightly extraverted — same type, different people. (No, I don’t condone mocking or calling out anyone’s score, as it would appear I’m suggesting in column A, but with a good test such a statement is potentially valid.)
  3. “That’s a clever twist” – Few personality tests are fully transparent to the respondent – this helps control the issue of social desirability. But some go too far with “tricky” scoring or scales. This is a problem in two ways: 1) if the trick gets out (google that) the assessment loses its value, and 2) respondents don’t like being tricked. It’s better to be fairly obvious with an item than to deal with very frustrated respondents who may just take you to court.
  4. “It was built using retina imaging” – Here’s another statement that needs a little help to see what’s going on (no pun intended). I’m not against new technology, it’s driving ever better assessment. But sometimes the technology is misused or inadequately supported with research. There’s a reason that some personality assessments have been around for more than 50 years. Validity isn’t always sexy.
  5. “That’s what I heard in a TED talk” — My intent here was to implicate “faddish” assessments. They may say they’re measuring the hot topic of the day, but more often than not, what’s hot in personality assessment, at least as far as traits are concerned, is not new. Research has concluded that many traits are not meaningfully different from ones that have been around a while. Don’t fall for an assessment just because you like the vocabulary; check the manual to see if it’s legitimately derived. There’s a reason that scientists prefer instruments based on the Big 5 traits (not the big 50).
  6. “Now that’s what I call an algorithm” — More complicated isn’t necessarily better. Some very good — typically public domain — assessments can be scored by hand. Tests that use Item Response Theory (IRT) for scoring do have more complicated algorithms than tests scored via Classical Test Theory (i.e., more like your 3rd grade teacher scored your spelling test). Still, a three-parameter IRT scoring method isn’t necessarily better than a one-parameter model, and it isn’t three times more complicated anyway. Proprietary assessments typically protect their copyright with nontransparent scoring, but for the most part what’s obfuscated or obscure is which items go into a calculation, not that the calculation is necessarily complex. Good assessments should employ fairly straightforward scoring to render both raw scores and percentile, or normed, scores.
  7. “It really has big correlations” — As with some prior items, a bit more context is needed to get the point I’m trying to make. Here the issue is sufficiency. Yes, a good instrument will show some relatively high correlations, but they need to be the right correlations. (And they need to be truthful. Unfortunately, I know of cases where misleading statistics have been presented.) It helps to know about research design and to have a realistic expectation for that validity correlation. If the vendor tells you that their assessment correlates with performance above .40, make them prove it. (And a .40 correlation equates to a 16% reduction in uncertainty, not a 40% reduction. Sometimes vendors get this confused.)
  8. “It’s too long, let’s cut some items” – It’s tempting to simply eliminate irrelevant scales or items for your specific need. After all, you’re not touching the items that comprise the traits you want to know. The problem is that the assessment is validated “as is.” Both the length of an assessment and its contents can influence scores. Priming biases are one example of how items interact with each other. Anytime you modify an assessment it needs to be validated. This is typically the case for short forms of assessments (i.e., they’ve been specifically validated), so it’s fair to ask about this alternate form.
  9. “That’s amazing” — By now you should see that a common factor in my problem statements has to do with how much goes on “out of view” (less is better) and how thorough the test manual is. “That’s amazing” is for magic shows, not science (I realize I’m parsing semantics here – you get my point).
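Item 6 above contrasts IRT scoring with classical hand scoring. As a rough illustration of why a three-parameter model isn't simply "three times better" than a one-parameter model, here is a minimal sketch (Python is my choice here; the function name and parameter values are hypothetical, not drawn from any specific assessment):

```python
import math

def irt_prob(theta, b, a=1.0, c=0.0):
    """Probability of endorsing an item under a logistic IRT model.

    theta: respondent trait level
    b: item location (difficulty)
    a: discrimination (the 1PL model fixes a = 1)
    c: lower asymptote, or "guessing" floor (3PL only)
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# 1PL (Rasch-style): only the item location b matters
p1 = irt_prob(theta=0.5, b=0.0)

# 3PL: adds discrimination and a lower asymptote
p3 = irt_prob(theta=0.5, b=0.0, a=1.2, c=0.2)
```

The 3PL model adds two parameters that reshape the item curve, but as the single `return` line shows, the arithmetic is barely more involved than the 1PL case -- which is the point: complexity of the scoring model is not, by itself, evidence of quality.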
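Item 7's closing point about the .40 correlation can be checked with one line of arithmetic: the proportion of criterion variance a validity coefficient accounts for is r squared, not r. A minimal sketch (purely illustrative; the function name is my own):

```python
# Variance explained by a validity coefficient is r^2, not r.
def variance_explained(r):
    """Proportion of criterion variance accounted for by a correlation r."""
    return r * r

r = 0.40
print(f"r = {r:.2f} explains {variance_explained(r):.0%} of the variance")
# 0.40^2 = 0.16, i.e., 16% -- not 40%
```

This is why a vendor's ".40 correlation with performance" corresponds to a 16% reduction in uncertainty, as noted above.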

A personality test can be — and most often, is — a legitimate assessment for many (most) jobs. (This even applies to machines. Researchers are using a variation of personality inventories to manipulate the perceived personality of robots.) Without exception, it’s critical to ensure that any assessment be validated for specific use, but you want to start with something that has been thoroughly researched. If everything has been done right, you can expect local results to be in line with the manual (assuming your tested population isn’t that different from the test manual sample(s)).

A lot goes into validating a personality test, and test manuals are lengthy. Although this is good and necessary for adequately evaluating the test, it can be used in intimidating or misleading ways. It’s easy for claims to be made out of context even when the manual is accurate, especially when decisions are made that affect one’s job. It’s important to review that test manual, not just the marketing brochure. (The good news is these manuals are boringly redundant. For example, the same figure is used for each scale, or trait, when repeating testing for gender bias.) Although I’m sure your vendor is a “stand-up” person, you can’t rely on that if your process gets challenged in court. It pays to review the manual thoroughly.

I hope your personality inventory passed the test.

Psychways is owned and produced by Talentlift, LLC.