Cognitive Bias Series #3: What Israeli Air Force Pilots Can Teach Us About Investing

By TradeSmith Research Team

Even the best investors in the world can benefit from TradeStops. We know this because we’ve extensively tested the results.

We track the stock portfolios of multiple billionaires — most of them investment legends — and the data has shown that, generally and over time, our software could have substantially improved their already excellent performance.

That leads to an interesting question. Why is this the case?

How is it that even the smartest investors on the planet can benefit from the use of algorithmic rules (as applied through easy-to-use investment software)?

It has to do with how the human brain works.

For example, we know how hard it is to predict the future. And in many cases predicting the future is impossible — there are just too many variables.

But it’s not just predicting the future that’s hard.

Interpreting the past can be just as challenging!

The problem, once again, is there are too many variables for the brain to grasp … plus an all-too-common tendency to draw the wrong conclusions from what we think we see.

Even the smartest investors in the world are prone to this. That’s partly why TradeStops has a track record of improving the billionaires’ portfolio results.

Our algorithms are based on large data sets going back decades. That means the TradeStops software can “see” things the human brain can’t — because the human mind was never meant to process millions of data points over a multi-decade time span.

Let’s look closer at why learning from the past can be just as hard as predicting the future.

Ed Catmull is the president and co-founder of Pixar, the computer animation film studio (now owned by Disney) responsible for huge hits like Toy Story, Finding Nemo, and Monsters, Inc.

In his book Creativity, Inc., Catmull talks about why the phrase “Hindsight is 20/20” is false:

“‘Hindsight is 20/20’… when we hear it, we normally just nod in agreement. ‘Yes … of course.’ Accepting that we can look back on what happened and see it with total clarity, learn from it and draw the right conclusions. The problem is, the phrase is dead wrong. Hindsight is not 20/20. Not even close.

Our view of the past, in fact, is hardly clearer than our view of the future. While we know more about a past event than a future one, our understanding of the factors that shaped it is severely limited. Not only that: because we think we see what happened clearly, hindsight being 20/20 and all, we often aren’t open to knowing more.”

This goes back to the “too many variables” problem.

It has been estimated that the human brain takes in less than five percent of the available information from its surrounding environment at any given time.

And that is just the stuff going on around you … let alone the oceans of information in the stock market!

Catmull’s point was that analyzing a situation can be incredibly tough, even with all the information that is available. There’s just too much data to process.

At the same time, our “mental models” for understanding the past are often no better than our models for predicting the future … and sometimes they are worse.

As Catmull points out, there is a danger in assuming (wrongly) that we understand what happened when we actually don’t … and then being closed off to other insights or possibilities.

And it gets more challenging, because “too many variables” isn’t the only problem. The counterintuitive nature of probability and statistics creates problems too.

It’s possible to have all the data and still draw the wrong conclusion — again because of the way the human brain works. Just as the brain was not designed to interpret large data sets, it was not built to understand statistics.

Daniel Kahneman, the Nobel Prize winner who pioneered the field of behavioral economics with his longtime collaborator Amos Tversky in the 1970s, tells a great story to illustrate this point. It has to do with Israeli Air Force pilots.

Kahneman was once tasked with teaching psychology to flight instructors in the Israeli Air Force. The goal was to improve the quality of flight instructor training, and thus improve pilot performance.

Kahneman started off by telling the flight instructors about one of his most important findings: The fact that skills training works better with rewards than it does with punishment.

In other words, if you are trying to teach a skill, you get better results from positive reinforcement (e.g., praise or encouragement) than from punishment (e.g., criticism or yelling). This has been shown in countless studies on both humans and animals.

After enthusiastically explaining why praise beats criticism for getting results, Kahneman immediately got some strong pushback. One of the most experienced flight instructors in the group thought Kahneman was dead wrong!

As recounted in Kahneman’s book, Thinking, Fast and Slow, this is what the seasoned flight instructor said:

“On many occasions I have praised flight cadets for clean execution of some aerobatic maneuver. The next time they try the same maneuver they usually do worse. On the other hand, I have often screamed into a cadet’s earphone for bad execution, and in general he does better on his next try. So please don’t tell us that reward works and punishment does not, because the opposite is the case.”

Kahneman called this “one of the most satisfying eureka experiences of my career.”

It was a great teachable moment, because the flight instructor’s response was a classic example of cognitive bias (rooted in how the brain works).

As it turns out, the instructor was both right and wrong at the same time.

  • He was right that flight cadets tend to do worse after a good performance.
  • He was also right that flight cadets tend to do better after a bad performance.
  • But he completely missed the reason why both of those things were true.

The reason cadet performance got worse after a good outing … or better after a bad one … had nothing to do with whether the cadets got praise or criticism. Praise versus yelling was not the dominant factor.

Instead, it was a basic statistical concept known as “regression to the mean.”

“Regression to the mean” describes the tendency of unusually extreme outcomes to be followed by outcomes closer to the average.

This means that, if you have an exceptionally good performance — far better than your personal average — you will probably do worse next time (closer to your average) simply by random chance.

It also means that, if you have an exceptionally bad performance — far worse than your average — your next performance will probably be better … again simply by random chance.
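
To see the effect concretely, here is a minimal simulation sketch of regression to the mean, assuming each performance is nothing more than a fixed skill level plus random luck. The numbers and names below are illustrative assumptions, not TradeSmith data:

```python
# A minimal sketch of regression to the mean, assuming a simple
# "fixed skill + random luck" model of performance. All values here
# are illustrative.
import random

random.seed(42)

N = 100_000   # number of simulated attempts
SKILL = 50.0  # the performer's true underlying skill level
NOISE = 10.0  # random luck on any single attempt

def attempt():
    """One performance score = fixed skill plus random luck."""
    return SKILL + random.gauss(0, NOISE)

first = [attempt() for _ in range(N)]
second = [attempt() for _ in range(N)]

# Group the second attempts by how the (independent) first attempt went.
after_good = [s for f, s in zip(first, second) if f > SKILL + NOISE]
after_bad = [s for f, s in zip(first, second) if f < SKILL - NOISE]

def avg(xs):
    return sum(xs) / len(xs)

print(f"average score after an unusually good first try: {avg(after_good):.1f}")
print(f"average score after an unusually bad first try:  {avg(after_bad):.1f}")
# Both averages come out near 50: the "good" group looks worse next time
# and the "bad" group looks better, with no praise or yelling involved.
```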

So, the flight instructor was observing a real phenomenon: The fact that pilots tend to do worse after a good performance and better after a bad performance (in both cases a regression to the mean).

But the instructor completely missed the fact that this pattern is grounded in normal statistical probabilities … and so he got it into his head that praise makes performance worse and yelling makes it better (which is completely backward)!

Kahneman wanted to show this proud flight instructor why his thinking was wrong, but he knew a lecture on probability wouldn’t cut it. So he devised a quick experiment instead.

Kahneman drew a target on the floor with a piece of chalk. Then he asked all the flight instructors to turn their backs and throw coins at the target without looking.

By analyzing the coin toss results on a blackboard, Kahneman was able to show that the instructors who did worse than average on the first toss mostly did better on the second try, simply by chance … and for the instructors who did better than average on the first try, the results were reversed.
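
Here is a rough sketch of that coin-toss demonstration as a simulation, assuming each blind throw lands a random distance from the target (the group labels are our own, added for illustration):

```python
# A rough replication of the coin-toss demonstration: two blind throws
# per instructor, where distance from the target is pure luck.
import random

random.seed(7)

def throw():
    """Distance from the chalk target; nothing but chance."""
    return abs(random.gauss(0, 1.0))

# Each instructor gets two throws.
throws = [(throw(), throw()) for _ in range(1_000)]
mean_first = sum(t1 for t1, _ in throws) / len(throws)

# Split by the first throw, the way praise and criticism get handed out.
worse_first = [(t1, t2) for t1, t2 in throws if t1 > mean_first]   # "criticized"
better_first = [(t1, t2) for t1, t2 in throws if t1 < mean_first]  # "praised"

improved = sum(1 for t1, t2 in worse_first if t2 < t1) / len(worse_first)
declined = sum(1 for t1, t2 in better_first if t2 > t1) / len(better_first)

print(f"below-average first throwers who improved:  {improved:.0%}")
print(f"above-average first throwers who got worse: {declined:.0%}")
# Most of each group "regresses" on the second throw, purely by chance.
```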

After telling the story in his book, Kahneman lamented that this kind of thing happens all the time (not just in the Israeli Air Force).

Human brains aren’t wired to interpret large data sets, and they aren’t naturally wired for statistics or probability either.

So, we tend to go with what we “see,” which is really just what our brains are telling us … and that can rest on biased assumptions that lead us to wrong conclusions.

A deeper lesson here is that almost every investor can benefit from a set of algorithmic rules that exist outside their brain, are powered by a large data set, and help with the process of decision-making.

The algorithms don’t have to take over the decision-making process completely … they can simply provide valuable inputs, particularly in areas where the brain needs assistance.
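
As a simple illustration of what an algorithmic rule living outside the brain might look like, here is a sketch of a generic trailing stop. To be clear, this is a hypothetical textbook rule with a made-up threshold, not TradeStops’ actual (proprietary) algorithm:

```python
# A toy illustration of an algorithmic rule "outside the brain":
# a generic 25% trailing stop. Hypothetical example only -- this is
# not TradeStops' actual algorithm, and the threshold is made up.
def trailing_stop_signal(prices, stop_pct=0.25):
    """Return the index at which the trailing stop fires, or None."""
    peak = prices[0]
    for i, price in enumerate(prices):
        peak = max(peak, price)            # track the highest price so far
        if price <= peak * (1 - stop_pct):
            return i                       # rule fires: exit, regardless of mood
    return None                            # rule never fired

# Example: the stop fires at 72, which is below 75 (25% off the 100 peak).
print(trailing_stop_signal([80, 95, 100, 90, 72, 110]))  # -> 4
```

The point isn’t the specific rule; it’s that the rule is written down in advance and fires the same way every time, no matter how the investor feels about the position.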

(As an aside, Daniel Kahneman is a big fan of algorithmic decision-making, and gives multiple examples in his book, Thinking, Fast and Slow, of why a well-designed system can beat human judgment.)

Another lesson here is that we should all stay humble … because we are all human beings.

It doesn’t matter how smart you are or how many advanced degrees you have.

In fact, research suggests that the higher your IQ, the better you are at constructing after-the-fact narratives about why a past event went the way it did!

The biases and flawed perceptions in play here are built into every human brain. Overconfidence and pride can make the cost of these biases worse, so staying humble is also a form of risk management!

All of which helps explain why TradeStops has the potential to help almost any investor … as shown by our back-tested results for the billionaire investment legends we follow.

TradeSmith Research Team
