“Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.” – Albert Einstein

All the notes were taken directly from the source mentioned! 

– – –

The Misunderstanding of Decision Making

I ask group members to come to our first meeting with a brief description of their best and worst decisions of the previous year. I have yet to come across someone who doesn’t identify their best and worst results rather than their best and worst decisions.

Hindsight bias is the tendency, after an outcome is known, to see the outcome as having been inevitable.

When we work backward from results to figure out why those things happened, we are susceptible to a variety of cognitive traps, like assuming causation when there is only a correlation, or cherry-picking data to confirm the narrative we prefer.

No sober person thinks getting home safely after driving drunk reflects a good decision or good driving ability.

Science writer, historian, and skeptic Michael Shermer, in The Believing Brain, explains why we have historically (and prehistorically) looked for connections even if they were doubtful or false. Incorrectly interpreting rustling from the wind as an oncoming lion is called a type I error, a false positive. The consequences of such an error were much less grave than those of a type II error, a false negative.

Two Thinking Systems

I particularly like the descriptive labels “reflexive mind” and “deliberative mind” favored by psychologist Gary Marcus. He wrote, “Our thinking can be divided into two streams, one that is fast, automatic, and largely unconscious, and another that is slow, deliberate, and judicious.”

Automatic processing originates in the evolutionarily older parts of the brain, including the cerebellum, basal ganglia, and amygdala. Our deliberative mind operates out of the prefrontal cortex.

Suggested Readings

  • Daniel Gilbert’s Stumbling on Happiness
  • Gary Marcus’s Kluge: The Haphazard Evolution of the Human Mind
  • Daniel Kahneman’s Thinking, Fast and Slow

Father of Game Theory

John von Neumann

This is what he did in the last ten years of his life: played a key role on the Manhattan Project, pioneered the physics behind the hydrogen bomb, developed the first computers, figured out the optimal way to route bombers and choose targets at the end of World War II, and created the concept of mutually assured destruction (MAD), the governing geopolitical principle of survival throughout the Cold War. Even after being diagnosed with cancer in 1955 at the age of fifty-two, he served in the first civilian agency overseeing atomic research and development, attending meetings, though in great pain, in a wheelchair for as long as he was physically able.

After finishing his day job on the Manhattan Project, he collaborated with Oskar Morgenstern to publish Theory of Games and Economic Behavior in 1944. William Poundstone, author of a widely read book on game theory, Prisoner’s Dilemma, called it “one of the most influential and least-read books of the twentieth century.”

Game theory was succinctly defined by economist Roger Myerson (one of the game-theory Nobel laureates) as “the study of mathematical models of conflict and cooperation between intelligent rational decision-makers.”

“Wanna Bet?”

Merriam-Webster’s Online Dictionary defines “bet” as “a choice made by thinking about what will probably happen.”

Not placing a bet on something is, itself, a bet. Choosing to go to the movies means that we are choosing to not do all the other things with our time that we might do during that two hours. We are betting against all the future versions of ourselves that we are not choosing.

The world is structured to give us lots of opportunities to feel bad about being wrong if we want to measure ourselves by outcomes. Don’t fall for it! And being wrong hurts us more than being right feels good.

Part of the skill in life comes from learning to be a better belief calibrator, using experience and information to more objectively update our beliefs to more accurately represent the world. The more accurate our beliefs, the better the foundation of the bets we make.

The system we already had was: (1) experience it, (2) believe it to be true, and (3) maybe, and rarely, question it later.

Truth-seeking, the desire to know the truth regardless of whether the truth aligns with the beliefs we currently hold, is not naturally supported by the way we process information. Instead of altering our beliefs to fit new information, we do the opposite, altering our interpretation of that information to fit our beliefs.

This irrational, circular information-processing pattern is called motivated reasoning. The way we process new information is driven by the beliefs we hold, strengthening them. Those strengthened beliefs then drive how we process further information, and so on.

Although the Internet and the breadth of multimedia news outlets provide us with limitless access to diverse opinions, they also give us an unprecedented opportunity to descend into a bubble, getting our information from sources we know will share our view of the world. Author Eli Pariser coined the term “filter bubble” in his 2011 book of the same name to describe how companies like Google and Facebook use algorithms to keep pushing us in the directions we’re already headed.

If we think of beliefs as only 100% right or 100% wrong, any information that disagrees with us is an assault on our self-narrative. When our self-image is at stake, we treat our fielding decisions as 100% or 0%: right versus wrong, skill versus luck, our responsibility versus outside our control. There are no shades of grey.

The smarter you are, the better you are at constructing a narrative that supports your beliefs, rationalizing and framing the data to fit your argument or point of view. Also, it turns out the better you are with numbers, the better you are at spinning those numbers to conform to and support your beliefs.

  • How do I know this?
  • Where did I get this information?
  • Who did I get it from?
  • What is the quality of my sources?
  • How much do I trust them?
  • How up to date is my information?
  • How much information do I have that is relevant to the belief?
  • What other things like this have I been confident about that turned out not to be true?
  • What are the other plausible alternatives?
  • What do I know about the person challenging my belief?
  • What is their view of how credible my opinion is?
  • What do they know that I don’t know?
  • What is their level of expertise?
  • What am I missing?

The more we recognize that we are betting on our beliefs (with our happiness, attention, health, money, time, or some other limited resource), the more we are likely to temper our statements, getting closer to the truth as we acknowledge the risk inherent in what we believe. We can train ourselves to view the world through the lens of “Wanna bet?” Once we start doing that, we are more likely to recognize that there is always a degree of uncertainty, that we are generally less sure than we thought we were, that practically nothing is black and white, 0% or 100%. And that’s a pretty good philosophy for living.

Samuel Arbesman’s The Half-Life of Facts is a great read about how practically every fact we’ve ever known has been subject to revision or reversal.

Declaring our uncertainty in our beliefs to others makes us more credible communicators.

By communicating our own uncertainty when sharing beliefs with others, we are inviting the people in our lives to act like scientists with us.

As novelist and philosopher Aldous Huxley recognized, “Experience is not what happens to a man; it is what a man does with what happens to him.”

We take credit for the good stuff and blame the bad stuff on luck so it won’t be our fault. The result is that we don’t learn from experience well. “Self-serving bias” is the term for this pattern of fielding outcomes.

Ideally, our happiness would depend on how things turn out for us regardless of how things turn out for anyone else. Yet, on a fundamental level, fielding someone else’s bad outcome as their fault feels good to us, and fielding their good outcome as luck helps our narrative along.

Stanford law professor and social psychologist Robert MacCoun studied accounts of auto accidents and found that in 75% of accounts, the victims blamed someone else for their injuries. In multiple-vehicle accidents, 91% of drivers blamed someone else. Most remarkably, MacCoun found that in single-vehicle accidents, 37% of drivers still found a way to pin the blame on someone else.

Positive Psychology

Sonja Lyubomirsky, a psychology professor at the University of California, writes that “the general conclusion from almost a century of research on the determinants of well-being is that objective circumstances, demographic variables, and life events are correlated with happiness less strongly than intuition and everyday experience tell us they ought to be. By several estimates, all of these variables put together account for no more than 8% to 15% of the variance in happiness.” What accounts for most of the variance in happiness is how we’re doing comparatively.

When you ask people if they would rather earn $70,000 in 1900 or $70,000 now, a significant number choose 1900. True, the average yearly income in 1900 was about $450, so we’d be doing phenomenally well compared to our peers from 1900. But no amount of money in 1900 could buy Novocain or antibiotics or a refrigerator or air-conditioning or a powerful computer we could hold in one hand.

Suggested Readings

  • Lyubomirsky’s work on the subject
  • Daniel Gilbert’s Stumbling on Happiness
  • Jonathan Haidt’s The Happiness Hypothesis

Habits

Habits operate in a neurological loop consisting of three parts: the cue, the routine, and the reward. A habit could involve eating cookies: the cue might be hunger, the routine going to the pantry and grabbing a cookie, and the reward a sugar high.

Charles Duhigg, in The Power of Habit, offers the golden rule of habit change—that the best way to deal with a habit is to respect the habit loop: “To change a habit, you must keep the old cue, and deliver the old reward, but insert a new routine.” When we have a good outcome, it cues the routine of crediting the result to our awesome decision-making, delivering the reward of a positive update to our self-narrative.

At least as far back as Pavlov, behavioral researchers have recognized the power of substitution in physiological loops. In his famous experiments, his colleague noticed that dogs salivated when they were about to be fed. Because they associated a particular technician with food, the presence of the technician triggered the dogs’ salivation response. Pavlov discovered the dogs could learn to associate just about any stimulus with food, including his famous bell, triggering the salivary response.

Perspective taking gets us closer to the truth because that truth generally lies in the middle of the way we field outcomes for ourselves and the way we field them for others. By taking someone else’s perspective, we are more likely to land in that middle ground.

You don’t have to be on the defensive side of every negative outcome because you can recognize, in addition to things you can improve, things you did well and things outside your control. You realize that not knowing is okay.

Think of it like a ship sailing from New York to London. If the ship’s navigator introduces a one-degree navigation error, it would start off as barely noticeable. Unchecked, however, the ship would veer farther and farther off course and would miss London by miles, as that one-degree miscalculation compounds mile over mile.
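
A quick back-of-the-envelope sketch of that compounding error (the crossing distance is my own rough figure, not from the book): over roughly 3,500 miles, an uncorrected one-degree heading error puts the ship dozens of miles off target.

```python
import math

# Rough figures (assumed for illustration): New York to London is about 3,500 miles.
distance_miles = 3_500
heading_error_deg = 1

# Lateral drift if the one-degree error is never corrected.
off_course_miles = distance_miles * math.sin(math.radians(heading_error_deg))
print(f"About {off_course_miles:.0f} miles off target")  # roughly 61 miles
```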

Exploratory Thought & Truthseeking Pod

If we can find a few people who will choose to form a truth-seeking pod with us and help us do the hard work connected with it, it will move the needle—just a little bit, but with improvements that accumulate and compound over time. We will be more successful in fighting bias and seeing the world more objectively, and, as a result, we will make better decisions. Doing it on our own is just harder.

Lerner and Tetlock offer insight into what should be included in the group agreement to avoid confirmatory thought and promote exploratory thought.

“Complex and open-minded thought is most likely to be activated when decision makers learn prior to forming any opinions that they will be accountable to an audience (a) whose views are unknown, (b) who is interested in accuracy, (c) who is reasonably well-informed, and (d) who has a legitimate reason for inquiring into the reasons behind participants’ judgments/choices.” Their 2002 paper was one of several they coauthored supporting the conclusion that groups can improve the thinking of individual decision-makers when the individuals are accountable to a group whose interest is in accuracy. In addition to accountability and an interest in accuracy, the charter should also encourage and celebrate a diversity of perspectives to challenge biased thinking by individual members.

Haidt, in his book The Righteous Mind: Why Good People Are Divided by Politics and Religion, built on Tetlock’s work, connecting it with the need for diversity.

“If you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system. This is why it’s so important to have intellectual and ideological diversity within any group or institution whose goal is to find truth.”

Exploratory thought becomes a new habit of mind, the new routine, and one that is self-reinforced.

The charter should include:

  • A focus on accuracy (over confirmation), which includes rewarding truth-seeking, objectivity, and open-mindedness within the group;
  • Accountability, for which members have advance notice; and
  • Openness to a diversity of ideas.

Be the best credit-giver, the best mistake-admitter, and the best finder-of-mistakes-in-good-outcomes.

A group with diverse viewpoints can help us by sharing the work suggested in the previous two chapters to combat motivated reasoning about beliefs and biased outcome fielding.

  • Why might my belief not be true?
  • What other evidence might be out there bearing on my belief?
  • Are there similar areas I can look toward to gauge whether similar beliefs to mine are true?
  • What sources of information could I have missed or minimized on the way to reaching my belief?
  • What are the reasons someone else could have a different belief, what’s their support, and why might they be right instead of me?
  • What other perspectives are there as to why things turned out the way they did?

We must be vigilant about this drift in our groups and be prepared to fight it. Whether it is the forming of a group of friends or a pod at work—or hiring for diversity of viewpoint and tolerance for dissent when you are able to guide an enterprise’s culture toward accuracy—we should guard against gravitating toward clones of ourselves.

CUDOS

Our brains have built-in conflicts of interest, interpreting the world around us to confirm our beliefs, to avoid having to admit ignorance or error, to take credit for good results following our decisions, to find reasons bad results following our decisions were due to factors outside our control, to compare well with our peers, and to live in a world where the way things turn out makes sense. We are not naturally disinterested. We don’t process information independent of the way we wish the world to be.

  • Communism (data belong to the group),
  • Universalism (apply uniform standards to claims and evidence, regardless of where they came from),
  • Disinterestedness (vigilance against potential conflicts that can influence the group’s evaluation), and
  • Organized Skepticism (discussion among the group to encourage engagement and dissent).

The Mertonian norm of universalism is the converse of “don’t shoot the messenger.” “Truth-claims, whatever their source, are to be subjected to preestablished impersonal criteria.” It means acceptance or rejection of an idea must not “depend on the personal or social attributes of their protagonist.” “Don’t shoot the message,” for some reason, hasn’t gotten the same historical or literary attention, but it addresses an equally important decision-making issue: don’t disparage or ignore an idea just because you don’t like who or where it came from. Another way to disentangle the message from the messenger is to imagine the message coming from a source we value much more or much less.

Skepticism is about approaching the world by asking why things might not be true rather than why they are true. When seeking advice, we can ask specific questions to encourage the other person to figure out reasons why we might be wrong.

Increase Agreeableness

If we want to engage someone with whom we have some disagreement (inside or outside our group), they will be more open and less defensive if we start with the areas where we agree, and there surely will be some.

If someone expresses a belief or prediction that doesn’t sound well calibrated and we have relevant information, try leading with “and” rather than “but,” as in, “I agree with you that …, and . . .”

Rather than rehashing what has already happened, try instead to engage about what the person might do so that things will turn out better going forward.

Recruiting past-us and future-us

Temporal discounting: We are willing to take an irrationally large discount to get a reward now instead of waiting for a bigger reward later.
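
A minimal sketch of what an irrationally large discount looks like (the dollar amounts are hypothetical, not from the book):

```python
# Hypothetical numbers: someone who takes $50 today rather than $100 a year
# from now is applying a very steep implicit discount to the future.
now_reward = 50.0
later_reward = 100.0

discount_factor = now_reward / later_reward        # value today of a dollar a year away
required_return = later_reward / now_reward - 1    # return that would justify grabbing it now

print(f"A dollar a year from now is being valued at ${discount_factor:.2f} today")
print(f"Taking the money now only beats waiting if it could earn more than "
      f"{required_return:.0%} over the year elsewhere")
```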

Imagine how future-us is likely to feel about the decision, or imagine how we might feel about the decision today if past-us had made it. “Every 10-10-10 process starts with a question. . . . What are the consequences of each of my options in ten minutes? In ten months? In ten years?”

Recruiting past-us and future-us in this way activates the neural pathways that engage the prefrontal cortex, inhibiting the emotional mind and keeping events in a more rational perspective. This discourages us from magnifying the present moment, blowing it out of proportion, and overreacting to it.

What has happened in the recent past drives our emotional response much more than how we are doing overall. That’s how we can win $100 and be sad, and lose $100 and be happy. The zoom lens doesn’t just magnify; it distorts.

Binding Devices

Ulysses contract: One of the simplest examples of this kind of contract is using a ride-sharing service when you go to a bar. A past version of you, who anticipated that you might decide irrationally about whether you are okay to drive, has bound your hands by taking the car keys out of them. Such precommitments provide a stop-and-think moment before acting, triggering the potential for deliberative thought.

We all know about the concept of a swear jar: if someone swears, they put a dollar in the jar. The idea behind it is that it will make people mindful about swearing and reduce how much they do it.

“Swear Jar” Penalty for:

  • Signs of the illusion of certainty: “I know,” “I’m sure,” “I knew it,” “It always happens this way,” “I’m certain of it,” “You’re 100% wrong,” “You have no idea what you’re talking about,” “There’s no way that’s true,” “0%” or “100%.”
  • Absolutes: “best” or “worst” and “always” or “never.”
  • Overconfidence: similar terms to the illusion of certainty.
  • Irrational outcome fielding: “I can’t believe how unlucky I got,” or the reverse, if we have some default phrase for credit taking, like “I’m at the absolute top of my game” or “I planned it perfectly.” This includes conclusions of luck, skill, blame, or credit: “They totally had that coming,” “They brought it on themselves,” and “Why do they always get so lucky?”
  • Any kind of moaning or complaining about bad luck just to off-load it, with no real point to the story other than to get sympathy. (An exception would be when we’re in a truth-seeking group and we make explicit that we’re taking a momentary break to vent.)
  • Generalized characterizations of people meant to dismiss their ideas: insulting, pejorative characterizations of others, like “idiot” or, in poker, “donkey,” or any phrase that starts by characterizing someone as “another typical ________.”
  • Shooting the message because we don’t think much of the messenger, accepting a message because of the messenger, or praising a source immediately after finding out it confirms your thinking.
  • Sweeping terms or labels for the person delivering the idea, such as “gun nut,” “bleeding heart,” “East Coast,” “Bible belter,” or “California values”—shorthand for political or social stances.
  • Signals that we have zoomed in on a moment: “worst day ever,” “the day from hell.”
  • Expressions that explicitly signal motivated reasoning, accepting or rejecting information without much evidence, like “conventional wisdom,” “if you ask anybody,” or “Can you prove that it’s not true?” Similarly, look for expressions that signal you’re participating in an echo chamber, like “everyone agrees with me.” The word “wrong” deserves its own swear jar: “wrong” is a conclusion, not a rationale, and it’s not a particularly accurate conclusion since, as we know, nearly nothing is 100% or 0%.
  • Any words or thoughts denying the existence of uncertainty, which should be a signal that we are heading toward a poorly calibrated decision.
  • Lack of self-compassion: “I have the worst judgment on relationships,” “I should have known,” or “How could I be so stupid?” If we’re going to be self-critical, the focus should be on the lesson and how to calibrate future decisions.
  • Signals we’re being overly generous editors when we share a story. Especially in our truth-seeking group, are we straying from sharing the facts to emphasize our version? Even outside our group, unless we’re sharing a story purely for entertainment value, are we shaping it to assure that our listener will agree with us? In general, are we violating the Mertonian norm of communism?
  • Infecting our listeners with a conflict of interest: including our own conclusion or belief when asking for advice, or informing the listener of the outcome before getting their input.
  • Terms that discourage engagement of others and their opinions, including expressions of certainty and initial phrasing inconsistent with that great lesson from improvisation—“yes, and . . .” That includes responding to others’ opinions or information by starting with “no” or “but . . .”

Reconnaissance on the Future

For us to make better decisions, we need to perform reconnaissance on the future. If a decision is a bet on a particular future based on our beliefs, then before we place a bet we should consider in detail what those possible futures might look like.

After identifying as many of the possible outcomes as we can, we want to make our best guess at the probability of each of those futures occurring. It’s about acknowledging that we’re already making a prediction about the future every time we make a decision, so we’re better off if we make that explicit. We are already guessing that the decision we execute will result in the highest likelihood of a good outcome given the options we have available to us.
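
Here is a minimal sketch of what making those guesses explicit might look like (the scenario, options, probabilities, and scoring are all hypothetical, not from the book):

```python
# List the plausible futures for each option, guess a probability for each,
# and compare the probability-weighted results instead of relying on gut feel.
options = {
    "take the new job": [
        # (probability of this future, how good it would be on a -10..10 scale)
        (0.6, 7),    # it works out well
        (0.3, -2),   # mediocre fit
        (0.1, -8),   # it goes badly
    ],
    "stay put": [
        (0.8, 2),
        (0.2, -1),
    ],
}

for name, futures in options.items():
    total_prob = sum(p for p, _ in futures)
    assert abs(total_prob - 1.0) < 1e-9, "probabilities for an option must sum to 1"
    expected_value = sum(p * outcome for p, outcome in futures)
    print(f"{name}: expected value {expected_value:+.1f}")
```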

A pre-mortem is an investigation into something awful, but before it happens. It starts with working backward from an unfavorable future, or a failure to achieve a goal; competing for favor in the group, or feeling good about contributing to the process, comes from coming up with the most creative, relevant, and actionable reasons why things didn’t work out.

***

Thanks for reading. Did you like the content you just read? You can help me spread these ideas by sharing this blog post through your social media channels or sending it as a direct message to your friends.