Don’t think you can control your emotions? You’re probably right – and it’s affecting your “batting average”

[Image: baseball striking a bat at high velocity, illustrating the placebo effect]

When I was about 10 years old, my dad gave me and my brother a baseball lesson. Specifically, we practiced hitting the ball. The lesson Dad gave was the same for my brother as it was for me, but the results couldn’t have been more divergent. From that day on, my brother became a “slugger” and I, a “striker.” If you don’t think you can control your emotions, you’re probably right, and you’ll likely become a striker like me. The placebo effect, familiar from drug studies, may help explain why.

A Little Difference with Big Results

What can science tell us about how my brother elevated his game and I tanked mine? (This is the scientific equivalent of, WTF?) All we know so far is that something different happened for my brother and me. It turns out that a simple belief likely deserves the credit for our diverging batting averages – a belief that is within personal control but not fully controllable by everyone. Huh? Why? How?

A significant component of Dad’s instruction addressed the bat – or at least implied its role in getting hits.

You’ve got to Believe

“You may be wondering: how are you going to hit a round ball with a round bat?” Dad posed.

“Yeah, Dad, I was wondering that very thing, HOW CAN I hit a round ball with a round bat?”

This turned out to be a key question and, I believe, THE pivotal condition that put my brother on a path to playoff-bound teams whereas I was never able to get my baseball career to first base.

“You don’t. You hit the ball on the flat side of the bat,” Dad encouraged us as he guided our fingers over the barrel of the bat.

“Here, feel here. Here you can feel the flat side of the bat. If you swing this flat side at the ball, you will get more hits.”

The Placebo Effect

My dad was counting on a powerful psychological phenomenon well known in the field of pharmacology – the placebo effect.

It didn’t entirely work out as Dad planned – at least, not for me.

My brother claimed to feel the flat side of our shared Louisville Slugger. Armed with the conviction that bat and ball actually are designed for hits, not strikes, my brother saw an immediate improvement in his hitting. I, on the other hand, did NOT feel the flat side of the bat and did NOT experience better batting. In fact, now convinced that the flat side of the bat (which doesn’t exist) was THE (missing) KEY to getting hits, I was barely able to make contact with the ball at all. The simplest explanation for the sudden divergence in our batting is belief: my brother believed that swinging the bat’s “flat side” at the ball would produce more hits, and I did not.

When a Placebo Becomes a Primary Variable -- i.e., a Big Deal

Here we have the experimental design of the placebo effect. By encouraging my brother and me to “feel” the flat side of the bat (which doesn’t really exist), my dad hoped to establish the critical belief that hits were possible if only one swung with the right side of the bat facing the ball. Confidence in this belief (I know – redundant, “confident belief”) was expected to produce an increase in hits. Given that this was the only identifiable difference between my brother and me, belief in one’s potential determined hits. My brother prospered in his newfound belief about the (reduced) difficulty of the task. But what happened to my placebo effect?

The placebo effect is well known in pharmacological research, or drug studies. It is the standard “psychology only” condition for virtually every drug entering the market. To test the possibility that merely believing in the efficacy of a given treatment has a significant effect on results beyond any biological agent, the new drug is tested against a placebo condition in which an inert substance – no active drug – is administered to a control group. This simple design has arguably yielded more advances in pharmacological and psychological research than just about any other phenomenon. It turns out that the placebo effect is not only present in just about every drug trial, it’s strong, rivaling the physiological effect of many new drugs.

Prove It

How important is the placebo effect to psychological research?

Critical. And in more ways than one.

In fact, THE primary question in psychological research is whether or not a treatment condition is significantly more effective than no treatment at all. This is the tested assumption of the null hypothesis, which is the bedrock of experimental design. As an inferential, data-driven science, the job of the researcher is to disprove the possibility that nothing happened. Placebos are a staple of pharmacological research aimed at rejecting the null hypothesis that nothing happened in favor of the presented alternative. This alternative account of the results isn’t proven true; the hypothesis of no effect is simply shown to be relatively improbable as compared to the hypothesized effect. In this regard, properly scientific psychological research seeks to show that “nothing” is an inferior explanation to the alternative hypothesis.
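
To make the logic concrete, here’s a minimal sketch in Python (all numbers are made up for illustration; this isn’t any particular trial). It runs a simple permutation test: if shuffled group labels rarely reproduce a difference as large as the observed one, the null hypothesis of “nothing happened” starts to look improbable.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical improvement scores (made-up numbers, illustration only):
# the placebo group improves some; the treatment group improves more.
placebo = rng.normal(loc=0.8, scale=1.0, size=50)
treatment = rng.normal(loc=1.5, scale=1.0, size=50)

observed = treatment.mean() - placebo.mean()

# Permutation test of the null hypothesis that group labels don't matter:
# shuffle the labels many times and count how often chance alone yields
# a difference at least as large as the one observed.
pooled = np.concatenate([placebo, treatment])
hits = 0
n_perm = 10_000
for _ in range(n_perm):
    rng.shuffle(pooled)
    if pooled[50:].mean() - pooled[:50].mean() >= observed:
        hits += 1

print(f"difference = {observed:.2f}, p ~ {hits / n_perm:.4f}")
# A small p doesn't prove the drug works; it only makes "nothing
# happened" a relatively improbable explanation of the data.
```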

Beyond the Placebo Effect

In psychological research, the placebo effect goes beyond the simple issue of whether a given effect is due merely to the non-treatment condition or to the presence of some stimulus (e.g., taking a pill). Here the matter applies as much to independent psychological mechanisms (i.e., variables and their manner of influence) as it does to the simple question of whether or not any effect is present. A placebo effect holds out the possibility that a given variable may play a more substantive role in behavior than simply serving as a placebo.

The possibility THAT something (oftentimes a psychological variable) can influence study results raises the question: “HOW?”

When a placebo advances in research from the fact that it has SOME kind of effect on results to the specific mechanism(s) behind that effect, the placebo becomes a key variable for study beyond the original focal variable(s). Science turns to addressing HOW the former placebo works instead of merely asserting THAT it works. This is when the placebo becomes an independent variable with a specific mechanism of action. This is when powerful psychological insights are made – insights that aren’t immediately written off to a placebo effect, but rather depend on the main effect of a placebo-like psychological condition.

For our hitting practice, belief in a flat side of the bat minimized the negative attitudinal, or motivational, effects underlying a known difficult task – hitting a round ball with a round bat. The change of attitude associated with our evolving placebo effect emulates a well-researched condition known as cognitive reappraisal.

Typically, this emotional motivation is deliberately and explicitly manipulated via an experimental condition in which participants are guided through the act of cognitive reappraisal. Such guidance is not necessary in this case because motivation is already provided by the goal of getting hits. The only thing necessary is to manipulate the participant’s belief in their ability to hit the ball.

That’s not funny

The punchline to an old psychology joke (“How many psychologists does it take to change a lightbulb?”) goes: “One, but the lightbulb has got to WANT to change.” The common understanding is that beliefs are precursors to acts, and that any change in action requires (or carries) a change in the causal, supporting beliefs. In most cases cognitive reappraisal is triggered directly by asking a study participant to consider the emotions (or beliefs, in our example) associated with the task in a new light. By reframing an emotional state this way, an individual can manage the emotional impact of a situation so that it has less of a negative effect on immediate performance. In this case the motivation to perform is assumed rather than directly manipulated. Here, motivation depends upon beliefs about the difficulty of the task. As these beliefs change, the motivating attitudes are predicted to change with them.

Reaping value

So – how can one get value out of this insight?

Wanting to change isn’t the same thing as believing one can, but it is a measurable and influential effect strongly predictive of being able to change. In this case, wanting to get hits is a motivational condition that both precedes the act of hitting the ball AND results from the consequence of getting hits. Therefore, managing one’s motivation for a task has the potential to enhance task performance. But how do you do this?

We’ve seen one good example of how to manage your emotions already – cognitive reappraisal. This is the equivalent of hitting the “reset button” on current thinking and its concomitant feelings. By changing the emotive nature of a task, we change its desirability and increase (or decrease) the motivation it generates. Another means of managing emotions is mindfulness meditation. I write about my personal experience of a week-long silent mindfulness meditation retreat here.

In conclusion

  1. Attitudes matter. They influence motivation, which has a corresponding influence on task performance by framing expectations and beliefs.
  2. Motivations matter. They are a form of attitude (which already matters) that can be deliberately controlled by adapting and associating various emotional effects/influences from one situation to another. In our batting example this was accomplished by changing beliefs about the probability of a successful/desired performance.

“It” may all be in your head – but there’s no guarantee that you will have control over “it.” However, if you cannot control it, then it will control you.

Psychology at work – it’s more important than you think!

What your Personality Test Report says about You

[Image: businessman’s hand plotting people’s personality test report scores on a grid]

People are frequently amazed at the accuracy of their personality test report. These reports can be powerfully enlightening as they describe an individual’s tendencies and character traits from what appears to be an objective point of view. When given the opportunity to review their report, not one person I’ve worked with has deferred. Everyone wants to know what their report says about them – whether they agree with it or not.

But sometimes personality test results are misleading and of no use at all. And it happens more often than you’d think.

In an experiment with college sophomores, a traditional favorite for academic researchers, the accuracy of personality tests was put to its own test. Following completion and scoring of a personality test given to all of the students in the class, the researcher asked for a show of hands from those for whom the test report accurately described them. A sizeable majority of hands went up – the report was an accurate depiction. There’s one thing they didn’t know:

Everyone got exactly the same report.

Yep. {I wish I’d thought of this first.}

Despite everyone completing the test in their personally distinctive manner, only one report was copied and distributed to the entire class of subjects. No matter how similar you may think college sophomores are, they’re not so identical as to yield precisely identical personality profiles. But still, a “J. Doe” report was viewed as a perfect fit by most. How does this happen?

Take a read of one of your personality test results. If you’re like most, you’ve completed several of these assessments and probably still have a report or two lying around. When reading your report, take note of the following indicators of BS reports:

  1. Conditional Statements: The number of times the words “may,” “might,” or “sometimes” show up

Example: “You may be unsure of yourself in a group.”

How “may”? Like, maybe, “90% unsure,” or “maybe completely confident”? The reader typically fills in this blank, unwittingly giving the report a “pass.”

  2. Compensatory Observations: The number of times opposing behaviors are presented next to each other

Example: “You have a hard time sharing your feelings in a group. However, with the right group you find it refreshing to get your emotions ‘off your chest.’”

So which are you? A paranoid prepper? Or a chest-pounding demonstrator? Either one of these opposing types could fit by this example.

  3. General Statements: The specificity of the descriptions, or lack thereof

Example: “You maintain only a few close friends.”

This statement is pretty much true by definition. It’s sufficiently open to interpretation that it fits nearly everyone.

  4. Differentiating Statements: {fewer is worse} The uniqueness of the descriptions.

Example: “Privately, you feel under qualified for the things others consider you to be expert at.”

The lack of differentiating statements is not exactly the same as making general statements. A specific statement may not be differentiating. The example above is specific, but not distinctive, as a fairly large percentage of people feel underqualified even in their own profession.

The point is, anyone can be right when they:

  1. Speak in couched probabilities,
  2. about “both-or” samples of a given behavior,
  3. in very general terms,
  4. about things that many people experience.

These four “hacks” provide all the latitude needed for ANY report to make you think it has “nailed you.”
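
These checks are mechanical enough that a few lines of code can run them. Here’s a toy sketch (the word list and the sample report are my own illustrative assumptions, not a validated instrument) that counts hedging words – indicator #1 above:

```python
import re

# Hedging words to flag (illustrative list, not a validated instrument)
HEDGES = {"may", "might", "sometimes", "often", "perhaps", "tend"}

def hedge_count(report_text: str) -> int:
    """Count conditional/hedging words in a personality report."""
    words = re.findall(r"[a-z']+", report_text.lower())
    return sum(word in HEDGES for word in words)

# A made-up snippet in the style of the examples above
report = ("You may be unsure of yourself in a group. Sometimes, "
          "though, you might find it refreshing to share your feelings.")
print(hedge_count(report))  # -> 3: "may", "sometimes", "might"
```

A report that trips this counter in nearly every sentence is earning its “fit” the cheap way.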

Beyond these tactics, many give too much credit to the personality test. Frequently, reports simply feed back EXACTLY what you put in via your responses. For example, the item “I like to organize things” may show up in a report as “You like to organize things.” There were probably more than a hundred items on the test – you probably don’t remember every response you made for every item.

Another way folks give too much credit to the personality test is by holding the belief that the instrument should be right. Beyond your general position on the validity of personality tests, publishers use various tactics to make the test report seem more “scientific”:

  1. Lots of statistics
  2. Lots of figures
  3. Distinguished endorsers
  4. Techno-babble

None of these things necessarily has anything to do with the actual validity of the test. But research shows they enhance people’s opinion of its validity.

What’s a good report look like?

  1. Good reports take a point of view. They provide specific summaries of behavioral style that really are uniquely you. If you gave the report to a friend and told them this was their report, they’d honestly say that it doesn’t accurately depict them – even if the two of you are inseparable. Fit is determined by both accommodation and exclusion. A good report speaks to you and no one else.
  2. Better reports don’t provide any narrative at all. They simply provide normative scores on the various dimensions (i.e., characteristic behaviors) covered by the test. This type of report allows an expert to interpret the full spectrum of dimensions in the broader context. Good interpreters know what to look for in terms of how the dimensions interact with each other and can further specify the evaluation with just a bit of extra information on the respondent. This does not mean that they already know the subject. It may be as little as knowing why or when the person completed the assessment.
  3. Great reports present just the facts. The report is a fairly straightforward summary of your responses, organized by dimension (trait) and compared to a group of others’ responses/scores. Better still, great reports provide more than a single number per dimension: beyond the average, they give some indication of the variation in responses within each dimension. This allows the interpreter to know just how much confidence a given score deserves. No variance = high confidence. Wide variance = low confidence. (A minimal illustration follows this list.)
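
Here’s a minimal sketch of that last point, with made-up item responses on a 1–5 scale (the dimension names and the confidence threshold are illustrative assumptions):

```python
from statistics import mean, pstdev

# Hypothetical item responses per dimension (1-5 scale, invented data)
responses = {
    "extraversion":  [4, 4, 5, 4, 4],  # consistent answers
    "agreeableness": [2, 5, 5, 5, 4],  # same average, scattered answers
}

for dim, items in responses.items():
    score = mean(items)
    spread = pstdev(items)  # no variance = high confidence in the score
    confidence = "high" if spread < 1.0 else "low"
    print(f"{dim}: score {score:.1f}, spread {spread:.2f} -> {confidence} confidence")
```

Both dimensions report the same average score (4.2), but only one of those numbers deserves much trust.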

So, what does your report really say about you? Depending on the factors I’ve outlined – it may say nothing at all (or worse).

It really helps to know some of this stuff.

With “big data” come big risks

[Image: cartoon showing people considering crossing the valley of big data]

Prebabble: Sound research is backed by the scientific method; it’s measurable, repeatable, and reasonably consistent with theory-based hypotheses. Data analysis is a component of scientific research but is not scientific by itself. This article provides examples of how research or summary conclusions can be misunderstood by fault of either the reviewer or the researcher – especially when big data are involved. It is not specific to psychological research, nor is it a comprehensive review of faulty analysis or big data.

When I was a grad student (and dinosaurs trod the earth), four terminals connected to a mainframe computer were the only computational resources available to about 20 psychology grad students. “Terminal time” (beyond the sentence that was graduate school) was as precious and competitively sought after as a shaded parking spot in the summer. (I do write from the “Sunshine State” of Florida.)

Even more coveted than time at one of the terminals, data from non-academic sources were incredibly desirable and much harder to come by. Access to good organizational data was the “holy grail” of industrial-organizational psychology dissertations. Whenever data were made available, one was not about to look this gift horse in the mouth; every effort was made to find meaningful research within those data. Desperate but crafty grad students could wrench amazing research from rusty data.

But some data are rusted beyond repair.

One of my cell-, I mean class-, mates came into possession of a very large organizational database. Ordinarily the envy of those of us without data, such was not the case here. It was well known that this individual’s data, though big, were hollow – a whole lot of “zeroes.” To my surprise and concern, this individual seemed to be merrily “making a go of it” with their impotent data. Once I was convinced they were absolutely going to follow through with a degree-eligible study (that no one “understood”), sarcasm got the best of me: “Gee, Jeff (identity disguised), you’ve been at it with those data for some time. Are any hypotheses beginning to shake out of your analyses?”

“Working over” data in hope of finding a reasonable hypothesis – sometimes called HARKing, “hypothesizing after the results are known” – is a breach of proper research and clearly unethical whether one knows it or not. But it happens – more today than ever before.

"Big data" has become the Sirens’ song, luring unwitting, (like my grad school colleague) or unscrupulous, prospectors in search of something – anything - statistically significant. But that’s not the way science works. That’s not how knowledge is advanced. That’s just “rack-n-hack” pool where nobody “calls their shots.”

It isn’t prediction if it’s already happened.

The statistical significance (or probability) of any “prediction” made in relation to an already known outcome is always perfect (hence, a “foregone” conclusion). This is also the source of many a superstition. Suppose you win the lottery by betting on your boyfriend’s prison number. To credit your boyfriend’s “prison name” for your winnings would be a mistake (and not just because he may claim the booty). Neither his number nor your choice of it had any influence in determining the outcome – even though you did win. But if we didn’t care about “calling our shots,” we’d argue for the impossibly small odds of your winning ticket as determined by your clever means of choosing it.

This error of backward reasoning is also known by the Latin phrase post hoc, ergo propter hoc – “after this, therefore because of this.” It is not valid to infer a cause from its effect after the fact. Unfortunately, the logic may be obvious, but the practice isn’t.

Sophisticated statistical methods can confuse even well-intended researchers who must decide which end of the line to put an arrow on. In addition, the temptation to “rewind the analysis” by running a confirmatory statistical model (i.e., a “calling my shot” analysis) AFTER a convenient exploratory finding (i.e., “rack-n-hack” luck) can be irresistible when one’s career is at stake, as is frequently the case in the brutal academic world of “publish or perish.” But doing this is more than unprofessional; it’s cheating and blatantly unethical. (Don’t do this.)
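
If you want to see how generous chance can be to a data prospector, here’s an illustrative sketch (pure noise, no real data): it manufactures 200 candidate predictors with no relationship whatsoever to the outcome, then goes looking for “significance.”

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure noise: 200 candidate predictors and one outcome, all random,
# with no true relationships anywhere in the data.
n_rows, n_predictors = 100, 200
X = rng.normal(size=(n_rows, n_predictors))
y = rng.normal(size=n_rows)

# Roughly, |r| > 2/sqrt(n) corresponds to p < .05 (two-tailed).
threshold = 2 / np.sqrt(n_rows)
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_predictors)])

# Expect about 5% of 200 = ~10 "significant" findings from nothing at all.
print(f"'significant' predictors found in pure noise: {int((np.abs(r) > threshold).sum())}")
```

Every one of those “hits” is a fluke. Calling your shot – stating the hypothesis before looking at the data – is what keeps noise from becoming a “finding.”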

Never before has the possibility of bad research making news been so great. Massive datasets are flung about like socks in a locker room. Sophisticated analyses that once required an actual understanding of the math in order to do the programming can now be run as easily as talking to a wish-granting hockey puck named “Alexa.” (“What statistical assumptions?”) Finally, publishing shoddy “research” results to millions of readers is as easy as snapping a picture of your cat.

All of the aforementioned faux pas (or worse) concern data “on the table.” The most insidious risk when drawing conclusions from statistical analyses – no matter how “big” the data are – is posed by the data that AREN’T on the table.

A study may legitimately find a statistically significant difference in children’s grades based on time spent watching TV versus playing outdoors. The study may conclude, “When it comes to academic performance, children that play outside significantly outperform those that watch TV.” While this is an accurate description of the data, the causality of the finding is uncertain: a third, unmeasured variable – say, parental involvement – could be driving both the outdoor play and the grades.
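
A small simulation makes the point (all numbers invented): give a hidden third variable – here labeled parental involvement – a causal hand in both outdoor play and grades, and a healthy correlation between play and grades appears even though play causes nothing.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 1_000
involvement = rng.normal(size=n)                 # hidden confounder
outdoor_play = involvement + rng.normal(size=n)  # has NO effect on grades
grades = involvement + rng.normal(size=n)        # driven by involvement only

r = np.corrcoef(outdoor_play, grades)[0, 1]
print(f"correlation(play, grades) = {r:.2f}")  # ~0.50, with zero causation
```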

To further complicate things, cognitive biases work their way into the hornet’s nest of correlation vs. causation. In an effort to simplify the burden on our overworked brains, correlation and causation tend to get thrown together in our “cognitive laundry bin.” Put bluntly, in our heads, correlation becomes causation.

Although it’s easy to mentally “jump track” from correlation to causation, the opposite move, i.e., from causation to correlation, is not so naturally risky.

Cigarette makers were “Kool” (can I get in trouble for this?) with labeling that claimed an “association” between smoking and a litany of health problems. They were not so “Kool” with terminology using the word “causes.”

Causal statements trigger a more substantial and lasting mental impression than statements of association. “A causes B” is declarative and signals “finality,” whereas “A is associated with B” is descriptive and signals “probability.” Depending on how a statement of association is positioned, it can very easily evoke an interpretation of causation.

Sometimes obfuscation is the author’s goal, other times it’s an accident or merely coincidental. Both are misleading (at best) when our eyes for big data are bigger than our stomachs for solid research.

This simple hack* will reduce stress and improve health

[Image: smiley face to reduce your stress]

Most {known} psychological research confirms what people already know. Yep. Most psychological research could receive the “No-duh” vs. the “Nobel” award. Beyond the obvious, other studies are obtuse – good luck with their titles, let alone the methods that consume most of the article. But sometimes something else happens. Here, I share a study that is well done AND revealing – useful for everyday application. This research yields a simple exercise that, if done, WILL reduce stress and improve your health.

I’ve offered tips to manage mood and to reduce stress before: 3 (easy) office tips to enhance your influence, 3 Surprising Motivation Killers and a couple more. But I must confess that these “tips” are mostly the result of personal experience or general knowledge acquired from multiple sources.

This is different. Or as Dorothy so astutely mentions to Toto in The Wizard of Oz, “… we’re not in Kansas anymore.” (Scariest movie I’ve ever seen…)

Although most research reveals the obvious, what’s surprising is what we do (or don’t do) with this obvious information. Just to check me, I bet you can’t think of three things off the top of your head that would make you or someone else a better person.

You did, didn’t you? (smirk)

No kidding: Why haven’t you done them? If you have, why aren’t you still doing them?

You’re probably wondering, “Why is Chris shooting himself in the foot?” It kinda sounds like he’s “giving up” his own profession: “psychological research is unsurprising and insignificant.”

Not quite.

One doesn’t fold with a straight flush, and I wouldn’t with a pair of aces (or would I?). I’ve come too far (and learned too much) to give in now.

Most of you will see through my thinly veiled attempt to entice and titillate as an effort to stir up your emotions. (Not sorry)

Beyond the sarcasm, pointing this out to you is making you even more emotional, even a bit demeaned. (Still, not sorry)

There’s an old saying in psychology, “All’s fair that changes behavior the way we want.” (Well, that’s what it should say.)

No. I’m no martyr. Not at all. I’m “the Fool.”

Here, I re-present one of many findings from I-O psychology that, if applied, would help so many. But it’s buried in an academic journal that few will notice. (I won’t mention it’s not even a journal of psychology, but that’s another story.)

Per Isaac Newton, … “a body at rest remains at rest unless acted upon by a force.”

Transferring to psychology, human-kind is a pretty big, “body.” Consider this, “the force.”

What follows is solid I-O psychology research with implications that can really make a difference.

Now that I hope to have gained your attention, here’s the simple activity that will make you happier and healthier:

At the end of every day, write down three (3) good things that happened and why they did.

That’s it. Easy as Pi. (What does that mean, anyway?)

Really?

Yes, that’s it. Record and reflect on three good things that happened. Your spirits will lift and your blood pressure will drop. You can reduce stress. Measure it.

Bono, J. E., Glomb, T. M., Shen, W., Kim, E., & Koch, A. J. (2013). “Building Positive Resources: Effects of Positive Events and Positive Reflection on Work Stress and Health.” Academy of Management Journal, 56, 1601–1627.

Don’t get me started on why this isn’t published in a journal known for PSYCHOLOGY!

Just get on with it. Prove me wrong.

* Yes. I am cool because I used the word “hack” vs. “tactic.”

How about a little science with that intuition?

[Image: psychology and intuition]

We’re all practicing psychologists — aren’t we? With our uncanny insight and intuition we’re able to ‘read’ another person in a mere 10 seconds. (This is, in fact, what research reveals about employment interviews). We know ourselves, and we know others. As a matter of fact, it’s intuitive — so simple we can do it with almost no thought. Therefore: We never make mistakes when assessing ourselves or others.

But everyone else does. Right? Consider the bias at work here (Hint: Fundamental attribution error).

Intuition is NOT the same as insight.

Why I’m doing this

I’ve started this blog to share insights from psychological research, along with my own experience applying psychology in the workplace, and to let you in on some of the most predictable truths you can use to understand and change yourself and others. This is about understanding intuition.

I don’t intend this to be an academic journal, but I will cite research or share personal experience that lends support to my posts.

I hope you find them helpful.