9 signs you might be using the wrong personality test


Personality testing is a big part of the way organizations make hiring decisions, and it has been for some time now (it wasn’t widely popular before about 1980). With advances in technology there has been a great proliferation of personality assessments. They’re not all good. A personality test is much easier to generate than it is to validate. The quiz below can help you figure out whether you’re using the wrong personality test. (Have some fun with it.)

Directions: The following list of paired statements (questions) reflects things I occasionally hear when folks are evaluating personality tests. For each pair, one response is more problematic than the other when it comes to evaluating personality tests. Reflecting on your current situation, which of the two statements would I be most likely to hear from you or others if I were a fly on the wall while you were getting the pitch from your vendor?

(Quiz image: “Am I using the right personality test?”)

Response Key: For all odd numbered pairs the problematic statement is in column A, for even numbered items the more problematic one is in column B.
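Since the quiz grid itself isn’t reproduced here, the response key above is mechanical enough to express as a tiny sketch (the function name is hypothetical, not part of any published scoring guide):

```python
def problematic_column(pair_number: int) -> str:
    """Return the column holding the more problematic statement.

    Per the response key: odd-numbered pairs point to column A,
    even-numbered pairs to column B.
    """
    return "A" if pair_number % 2 == 1 else "B"

# Key for all nine pairs
print([problematic_column(n) for n in range(1, 10)])
# prints: ['A', 'B', 'A', 'B', 'A', 'B', 'A', 'B', 'A']
```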

Some of the statements require more assumption than others, so don’t get too caught up in the scoring. These are my answers and rationale:

  1. “It sure worked for me” — Frequently a personality test is sold by having the decision maker complete the assessment. This isn’t a bad thing — I encourage users to complete a personality test themselves. The potential problem is that this is frequently the primary (or sole) evaluation criterion for a decision maker. Vendors know this, and some hawk an instrument that produces unrealistically favorable results. “It says I’m good, therefore it must be right.” As for column B, the 300-page manual: good manuals are typically lengthy. It takes some pulp to present all the evidence supporting a properly constructed inventory.
  2. “A type’s a type” – The most popular personality assessment of all, the MBTI, presents results for an individual as one of 16 types. Scores, to the extent that they are reported, only reflect the likelihood that the respondent is a given type or style – not that they are more or less extraverted, for example. But research and common sense say that personality traits do vary in degree; someone can be “really neurotic.” Two individuals with the same type can be quite different behaviorally based on how much of a trait they possess. A very extraverted person is different from someone who is only slightly extraverted — same type, different people. (No, I don’t condone mocking or calling out anyone’s score, as it would appear I’m suggesting in column A, but with a good test such a statement is potentially valid.)
  3. “That’s a clever twist” – Few personality tests are fully transparent to the respondent – this helps control the issue of social desirability. But some go too far with “tricky” scoring or scales. This is a problem in two ways: 1) if the trick gets out (Google it), the assessment loses its value, and 2) respondents don’t like being tricked. It’s better to be fairly obvious with an item than to deal with very frustrated respondents who may just take you to court.
  4. “It was built using retina imaging” – Here’s another statement that needs a little help to see what’s going on (no pun intended). I’m not against new technology, it’s driving ever better assessment. But sometimes the technology is misused or inadequately supported with research. There’s a reason that some personality assessments have been around for more than 50 years. Validity isn’t always sexy.
  5. “That’s what I heard in a TED talk” — My intent here was to implicate “faddish” assessments. They may say they’re measuring the hot topic of the day, but more often than not, what’s hot in personality assessment, at least as far as traits are concerned, is not new. Research has concluded that many “new” traits are not meaningfully different from ones that have been around a while. Don’t fall for an assessment just because you like the vocabulary; check the manual to see if it’s legitimately derived. There’s a reason that scientists prefer instruments based on the Big 5 traits (not the big 50).
  6. “Now that’s what I call an algorithm” — More complicated isn’t necessarily better. Some very good — typically public domain — assessments can be scored by hand. Tests that use Item Response Theory (IRT) for scoring do have more complicated algorithms than tests scored via Classical Test Theory (i.e., more like the way your 3rd grade teacher scored your spelling test). Still, a three-parameter IRT scoring method isn’t necessarily better than a one-parameter model, and it isn’t three times more complicated anyway. Proprietary assessments typically protect their copyright with nontransparent scoring, but for the most part what’s obfuscated or obscure is which items go into a calculation, not that the calculation is necessarily complex. Good assessments should employ fairly straightforward scoring to render both raw scores and percentile (normed) scores.
  7. “It really has big correlations” — As with some prior items, a bit more context is needed to get the point I’m trying to make. Here the issue is sufficiency. Yes, a good instrument will show some relatively high correlations, but they need to be the right correlations. (And they need to be truthful. Unfortunately, I know of cases where misleading statistics have been presented.) It helps to know about research design and to have a realistic expectation for that validity correlation. If the vendor tells you that their assessment correlates with performance above .40, make them prove it. (And a .40 correlation equates to a 16% reduction in uncertainty, not a 40% reduction. Sometimes vendors get this confused.)
  8. “It’s too long, let’s cut some items” – It’s tempting to simply eliminate scales or items that are irrelevant for your specific need. After all, you’re not touching the items that comprise the traits you want to know about. The problem is that the assessment is validated “as is.” Both the length of an assessment and its contents can influence scores. Priming biases are one example of how items interact with each other. Anytime you modify an assessment, it needs to be revalidated. This has typically been done for short forms of assessments (i.e., they’ve been specifically validated), so it’s fair to ask about an alternate form.
  9. “That’s amazing” — By now you should see that a common factor in my problem statements has to do with how much goes on “out of view” (less is better) and how thorough the test manual is. “That’s amazing” is for magic shows, not science (I realize I’m parsing semantics here – you get my point).
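Item 6’s contrast between one- and three-parameter IRT scoring can be made concrete. This is an illustrative sketch of the standard logistic item response function, not any vendor’s algorithm; the parameter values are made up:

```python
import math

def p_correct(theta: float, a: float = 1.0, b: float = 0.0, c: float = 0.0) -> float:
    """Three-parameter logistic (3PL) item response function.

    theta: respondent trait level
    a: item discrimination, b: item difficulty, c: lower asymptote ("guessing")
    With a = 1 and c = 0 this reduces to the one-parameter (Rasch) model,
    so the "fancier" model is the same logistic curve with two extra knobs.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Same respondent, same item, scored under both models (hypothetical values)
rasch = p_correct(0.5)                            # 1PL: a=1, c=0
three_pl = p_correct(0.5, a=1.3, b=0.2, c=0.2)    # 3PL
```

The point of the sketch: the three-parameter model is the one-parameter model plus two tuning parameters, not something three times deeper.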
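The arithmetic behind item 7’s caution about a .40 validity correlation is just the coefficient of determination (squaring r), which you can check in one line:

```python
r = 0.40                 # claimed validity correlation
r_squared = r ** 2       # coefficient of determination: variance explained
print(f"A correlation of {r:.2f} explains {r_squared:.0%} of the variance")
# prints: A correlation of 0.40 explains 16% of the variance
```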

A personality test can be — and most often is — a legitimate assessment for many (most) jobs. (This even applies to machines. Researchers are using a variation of personality inventories to manipulate the perceived personality of robots.) Without exception, it’s critical to ensure that any assessment is validated for its specific use, but you want to start with something that has been thoroughly researched. If everything has been done right, you can expect local results to be in line with the manual (assuming your tested population isn’t that different from the test manual sample(s)).

A lot goes into validating a personality test, and test manuals are lengthy. Although this is good and necessary for adequately evaluating the test, it can be used in intimidating or misleading ways. It’s easy for claims to be made out of context even when the manual is accurate, especially when decisions are made that affect one’s job. It’s important to review the test manual, not just the marketing brochure. (The good news is these manuals are boringly redundant. For example, the same figure format is repeated for each scale, or trait, when testing for gender bias.) Although I’m sure your vendor is a “stand up” person, you can’t rely on that if your process gets challenged in court. It pays to review the manual thoroughly.

I hope your personality inventory passed the test.

Psychways is owned and produced by Talentlift, LLC.

The top 5 reasons succession planning goes wrong and how to fix them


Succession planning may be – no – it IS the most important job of executive leadership. The critical aim of this work is to ensure leadership continuity by identifying individuals with the highest potential to fill key positions in an organization. This is work that affects more than just the future of individuals’ careers, it affects the fate of the entire organization. I have literally seen a company’s stock price swing more than 10% in a day when news about executive position replacements gets out. Even in moderately large organizations billions of dollars can be at stake when it comes to answering the question, who will lead? As such, succession planning represents possibly the highest stakes of all executive assessment. Unfortunately, most organizations are really bad at succession planning. And more often than not, those stock prices swing lower rather than higher based on news of new leadership. Maybe the investors are right.

Succession planning is typically construed as good defense. In order to ensure leadership continuity, a list of individuals most ready to backfill a given job is prepared so that in the event of an open position (typically unanticipated) a succession of leadership changes can be made. Backfills are made not just for the open position but for the “domino effect” that cascades through the organization based on even one or two key moves. While this may be a good replacement plan for key executives, it’s bad for true, strategic organizational succession planning. It’s like looking in the rearview mirror in order to go forward – you might just run over someone and you won’t get where you want to go.

Let’s examine some of the most challenging realities that plague most succession planning efforts.

Succession Planning - Done Wrong

  1. It’s based on backwards thinking.

The typical exercise involves identifying the next in line, i.e., "backfill," for a job that opens up, usually due to an executive departure from the organization. While this may be a good way to stay where you are as an organization, your competition is going forward at full speed. The error here is replicating what you’ve had versus positioning what you’ll need.

  2. It’s driven by those who need a successor.

This problem applies more broadly than succession planning. From a personal point of view, the assumption here is that if I win the lottery, then my groomed successor will replace me. Wrong. If you leave the organization, you most definitely won’t be the one making key executive moves – you’re not even around. The most likely person to make any backfill is the person to whom the position needing a backfill reports, not the one in the position. For this reason, it’s imperative that executives know not just their direct reports but also the employee population at least two levels beneath them.

Guess what? I have facilitated numerous succession planning efforts where executives have no idea who reports to their direct reports. Photos don’t even jog their memory (and can be controversial in this context). “You rode up on the elevator with them.” Still don’t know them.

  3. It’s based on the strongest of psychological biases.

Too many positions are filled based on the “like me” method. Naturally, we’re wired to think that we are exactly what “my” position needs, therefore I am looking for a “mini-me.” Well, you may think you’re at the center of the universe (face it, we all do), but if you ask others, you’ll get a very different point of view. Others in the organization may not want your backfill to be a mini you. That’s a good perspective to cultivate but it’s almost impossible when you’re in the room. This is why politics play such a strong role in most succession planning.

  4. It’s personal, not organizational.

This is another bias that inserts itself in the succession planning process. Leaders are VERY sensitive about “their people.” In fact, a leader oftentimes acts as though “their people” are just like family members – and sometimes THEY ARE, but that’s a whole other concern not to be addressed here. Regardless, they aren’t “your people,” they’re the organization’s people.

  5. It’s based on flawed judgement.

Even on the few occasions when someone tells me they’re a poor judge of people, guess who still weighs in on talent to fill open positions? Yep, everyone has a point of view when it comes to selection. And the closer that selection is to the individual, the stronger their judgement gets.

Studies consistently find human judgement to be a poor predictor of actual talent. If only those who admit they’re a poor judge of talent actually deferred to more objective, scientific means of assessment. But they don’t. Sometimes the best you can do is to present decision makers with well-designed psychometric instruments that do make accurate assessments and hope that reasoned, rather than inferred, judgement prevails. This works best when the judge knows a bit about how the given psychometric tools work. In many cases, science will make an impact. You’ve got to take the magic out of the assessment and encourage those who “lean in” to a better way.

Succession Planning - Done Right

  1. Think of succession planning as progression planning.

Instead of priming defensive and myopic mindsets with terms like “succession,” “my successor” and “backfill,” use terms like “progression,” “strategic,” “organization,” and “future fill.” This can even help with the personal biases, as you and history are intrinsically bound. (See #s 2, 3, 4 and 5.) Good succession planning isn’t possible without good strategic planning. Your talent for the future should look like what you need in the future, not what you’ve had in the past.

  2. Have leaders discuss talent at least two levels below them.

The first time you do this you may find yourself in a circular loop, “we can’t talk about the talent because we don’t know this talent” meets, “we don’t know this talent because we’ve never talked about this talent.”

That’s actually a good start. When leaders admit they need deeper insight you have the opportunity to improve on those shallow evaluations. Ignorance can be your saving grace! I’d much rather work with a leader that “doesn’t know everything” and is right about that than one who’s confident in their wrongful thinking. Now’s a good time to introduce better assessments and more strategic thinking.

  3. Train leaders in good assessment and talent management.

This is a big deal. You have to take the “like me” bias out of assessment. Otherwise you have the old cliché: “when you’re a hammer, the world looks like a nail.” And since diversity and inclusion are nowhere near where they need to be in organizations – especially at the most senior levels – you need the seasoned group of executives to really recognize and know talent that isn’t at all like them. But good, accurate assessment is hard and typically counterintuitive. Still, it’s not impossible to have a leader acknowledge that their best replacement won’t look like them.

  4. Ensure leaders discuss not only “their” function; make them responsible for all of the organization's functions.

Leaders think in their silos and don’t want others messing with their kingdom. That’s all wrong. You need to open up and break personal “myndsets” and create organizational mindsets. After all, these are individuals entrusted with the future of the organization – not just one function or group. By getting leaders to talk about talent in other groups you also improve the likelihood of cross-functional moves. These are critical to effective succession planning as they work to create organization leaders versus expert leaders. Well-rounded talent knows more than accounting.

  5. Use properly validated assessments.

Study after study shows that good psychometrics beat good assessors. While there are exceptions, you aren’t one of them. Moreover, research finds that “good assessors” are primarily good at assessing specific characteristics or traits – but not all. A comprehensive set of psychological assessments used by an expert in workplace psychology should be mandatory for proper succession planning. Furthermore, studies show that training assessors with the frame of reference provided by properly validated psychometrics actually improves their personal evaluations.

Good succession planning shouldn’t be a blind date. Open leadership’s eyes to new, unknown talent and give them the tools to truly know that talent. Only by clarifying what the organization needs in the future can you break some of psychology’s strongest biases and truly ensure organizational continuity AND progress.
