Taleb's Black Swan is Required Reading for Decision Making

The Black Swan is a philosophical essay on the role of knowledge and our mistreatment of it, specifically as it relates to our ability (inability) to predict and forecast. Although at times Taleb sounds suspiciously like a man with a hammer, and his writing style is pretentious (get out your dictionary), The Black Swan is required reading for any decision maker in any field.

--Notes--

What is a Black Swan event?

  1. Rare and unpredictable
  2. Super impactful (world changing)
  3. Invites retrospective predictability. We attempt to make sense of the event after the fact by coming up with explanations for why it was bound to happen (narrative)

These are the events that shape the world. The world is not defined by what is ordinary and expected but by that which is unexpected.

The way we typically go about trying to understand the world is fundamentally flawed. We study what is normal, ignoring the outliers, when what is normal is irrelevant.

You cannot truly understand anything without considering the extremes:

  • Can you understand health without examining disease?
  • Can you understand what a criminal is capable of by examining what he does on an ordinary day?

Almost every event that has had a significant impact on the history of the world was unexpected. Therefore, what we know is irrelevant. It is what we don’t know that matters.

Reduce the number of negative Black Swan events that we expose ourselves to, while preparing ourselves to create opportunities for and take advantage of positive Black Swan events.

One way to do this is by tinkering and experimenting. Trial and error over planning. Expose your process to more randomness and capitalize on positive outcomes and opportunities.

History and the Triplet of Opacity:

“History is opaque. You see what comes out, not the script that produces events, the generator of history. There is a fundamental incompleteness in your grasp of such events, since you do not see what’s inside the box, how the mechanisms work.”

The “triplet of opacity” is Taleb's term for the three flaws the human mind suffers as it relates to history:

  1. The illusion of understanding: we don’t realize what is really going on in the world, and we don’t grasp how truly complex and random it is
  2. Retrospective distortion: we can only examine events after the fact, and this causes us to come up with explanations that discount the event’s rarity or conceivability
  3. Overvaluation of facts and authority figures.

How History is Written: in real time, there are countless data points in the air, but after the fact, we tend to remember only the ones that provide explanations for the events that ended up unfolding. We remember the ones that stick out to us, for some reason, through our now-altered lens. In this way, much of what is written about history after the fact invokes the narrative fallacy.

Danger of Definitive Categorizing

“Categorizing is necessary for humans, but it becomes pathological when the category is seen as definitive, preventing people from considering the fuzziness of boundaries, let alone revising their categories.”

Why is categorizing dangerous? Because it reduces complexity and simplifies the world in theory while ignoring complexity and uncertainty in reality, which according to Taleb acts as a generator for Black Swan events.

“If you want to see what I mean by the arbitrariness of categories, check the situation of polarized politics. The next time a Martian visits earth, try to explain to him why those who favor allowing the elimination of a fetus in the mother’s womb also oppose capital punishment. Or try to explain to him why those who accept abortion are supposed to be favorable to high taxation but against a strong military. Why do those who prefer sexual freedom need to be against individual economic liberty?”

This is also one of the primary reasons why I don’t answer the question “Are you a Democrat or a Republican?”

Scalable vs Non-Scalable Professions

“Some professions, such as dentists, consultants, or massage professionals, cannot be scaled: there is a cap on the number of patients or clients you can see in a given period of time. (…) Other professions allow you to add zeroes to your output (and your income), if you do well, at little or no extra effort.”

Good general advice is to avoid scalable professions, and opt for nonscalable ones. You are not guaranteed to succeed in scalable professions, and these are where winner-take-all effects can produce inordinate inequalities.

"The inequity [from scalable activities] comes when someone perceived as being marginally better gets the whole pie.”

Mediocristan vs Extremistan

In Mediocristan, data tends to follow the bell-curve in which “no single instance will significantly change the aggregate or the total.” Physical properties like height, weight, calorie consumption, etc. belong to Mediocristan. Line a thousand people up (or a million), and the heaviest of them all will not represent a noticeable piece of the whole.

In Extremistan, “inequalities are such that one single observation can disproportionately impact the aggregate, or the total.” Almost all social properties belong here. Take those same thousand people and compare something like their net worth, and the largest deviation is likely to represent a disproportionate share of the total. Line a thousand authors up, and the most successful will likely represent 99% of total book sales.

“Knowledge” works against us when we try to make sense of data from Extremistan with models that only work in Mediocristan (like the bell curve). When we make projections into the future based on past data, we may be putting ourselves at risk for Black Swans (of the negative variety – or by missing opportunities for positive ones).
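
A minimal numerical sketch of the difference (my own illustration; the distributions and parameters are assumptions, not Taleb's): draw a thousand observations from a thin-tailed distribution (heights) and from a heavy-tailed one (a Pareto stand-in for net worth or book sales), and look at what share of the total the single largest observation represents.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

# Mediocristan: heights in cm, roughly normal around 170 with a 10 cm spread
heights = rng.normal(loc=170, scale=10, size=n)

# Extremistan: a heavy-tailed Pareto draw standing in for net worth or book sales
wealth = (rng.pareto(a=1.1, size=n) + 1) * 10_000

def max_share(sample):
    """Share of the total contributed by the single largest observation."""
    return sample.max() / sample.sum()

print(f"Tallest person's share of total height: {max_share(heights):.2%}")   # around 0.1%
print(f"Richest person's share of total wealth: {max_share(wealth):.2%}")    # often 30-90%
```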

"Blindness” to Black Swans has certain themes:

  • Confirmation error: generalizing from available data
  • Narrative fallacy: creating stories to fit the data in an attempt to make sense of it
  • General ignorance of the existence of Black Swans. (Does anyone else remember being told in economics class to ignore outliers for the sake of the model?)
  • We forget about the silent evidence (that which we do not see) missing from history.
  • We focus on specific uncertainty as opposed to general uncertainty, or the unknown unknown. “Note that after the event you start predicting the possibility of other outliers happening locally, that is, in the process you were just surprised by, but not elsewhere. After the stock market crash of 1987 half of America’s traders braced for another one every October – not taking into account that there was no antecedent for the first one. (…) Mistaking a naive observation of the past as something definitive or representative of the future is the one and only cause of our inability to understand the Black Swan.”

Domain Specificity

It is unnatural and difficult for us to transfer our “reactions, our mode of thinking, our intuitions” etc. across different contexts because “we react to a piece of information not on its logical merit, but on the basis of which framework surrounds it, and how it registers with our social-emotional system.” Points about this:

  • Example: taking the escalator to the gym and then spending 30min on the Stairmaster.
  • When I arrived in Tangier, I had already read that it’s common for locals to take advantage of tourists through “free tours,” etc. Yet I fell for almost all of the things I had just read. Knowledge learned in the classroom is not always so easily applied to the environments we operate in.
  • In Richard Feynman’s Surely You're Joking, Mr. Feynman!, Feynman describes the time he presented a well-known physicist with a riddle in which the answer involved the simple application of a known gravitational constant/law/equation. The physicist who struggled to figure it out was a prominent collaborator of Albert Einstein’s. This phenomenon is described in fascinating detail in many contexts in Feynman’s book.
  • “Knowledge, even when it is exact, does not often lead to appropriate actions because we tend to forget what we know, or forget how to process it properly if we do not pay attention, even when we are experts.”
  • “No evidence of possible outliers” is not the same as “evidence of no possible outliers.”
  • “Almost all terrorists are Muslims” is much different from “almost all Muslims are terrorists” yet consider the current political dialogue in America and parts of Europe.
  • Anyone who points out the fact that a certain successful entrepreneur dropped out of college and then attempts to connect that to the value of dropping out of college [example is mine].
  • We naturally gravitate toward and look for data that confirms what we already know (or think we know) – Confirmation bias.

We should focus less on knowledge which “confirms” ideas, and more on knowledge which falsifies claims and ideas. We can know more about what we don’t know than what we do know. (One-sided skepticism / negative empiricism / falsification / Karl Popper)

“It is impossible – biologically impossible – to run into a human several hundred miles tall, so our intuitions rule these events out. But the sales of a book or the magnitude of social events do not follow such strictures. It takes a lot more than a thousand days to accept that a writer is ungifted, a market will not crash, a war will not happen, a project is hopeless, a country is ‘our ally,’ a company will not go bust, a brokerage-house security analyst is not a charlatan, or a neighbor will not attack us. In the past, humans could make inferences far more accurately and quickly. (…) The instinct to make inferences rather quickly, and to ‘tunnel’ (i.e., focus on a small number of sources of uncertainty, or causes of known Black Swans) remains rather ingrained in us. This instinct, in a word, is our predicament.”

Narrative Fallacy

“The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.”

We fall victim to the narrative fallacy because (this is ironic) information is costly to obtain, store, and manipulate. It is much easier to ascribe a succinct and sequential “story” to data than to (impossibly) maintain a vast, objective database of all the information.

In our effort to simplify and explain, outliers (unpredictable Black Swans) often get ignored (or explained away retroactively).

For example: prior to an event (say, the outbreak of WWI), there will be a trillion data points flying around. After the event, you may have a hard time recalling any of those points that don’t directly relate to the event itself. In other words, you will suddenly be able to recall information that seems to point to the inevitability of the event (“it was so obvious!”) without realizing that what you remember is an infinitesimal sample of the much broader noise that was happening at the time. If you were to examine the daily journal/diary of a journalist leading up to the outbreak of the war, and compared it to his journal as he later reflects on his (past) experience leading up to that war, you would find more evidence for causes in the historical account.

This has unintended and negative consequences on our ability to predict, because it distorts the nature of uncertainty and causes forecast error and over-confidence in the models we use to predict the future.

To see how naturally ingrained the narrative fallacy is in us, consider the following two statements:

A. “Joey seemed happily married. He killed his wife.”
B. “Joey seemed happily married. He killed his wife to get her inheritance.”

Which statement is more believable? The first is actually more likely, because it contains more possibilities (an infinite number), including the second statement. Even so, the second statement is more compelling and attractive – indeed, more obvious. And although it’s easy to see the distinction when it is laid out like this, when you consider the domain-specificity problem outlined above, it’s not hard to see how a mistake like this can become common (if not rampant) in the world of decision making. Indeed, have you EVER studied a historical account in school and NOT been given an explanation?
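
For what it’s worth, the probability rule behind this (my own formalization of the point, not a quote from the book) is the conjunction rule: adding a detail can never make a statement more probable, only less.

```latex
% Conjunction rule: for any events A and B, the joint event is never more probable.
P(A \cap B) \le P(A)
% Here: P(\text{killed his wife for her inheritance}) \le P(\text{killed his wife}),
% even though the more detailed statement feels more plausible.
```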

Another example:

A. “A massive flood somewhere in America in which more than a thousand people die.”
B. “An earthquake in California, causing massive flooding, in which more than a thousand people die.”

Again, same thing. But which statement is more likely to sell insurance? Which statement is more likely to be presented to a government bureaucrat responsible for allocating funds for one cause or another?

Other examples of narrative fallacy related to decision making:

  • You don’t buy a specific car because a friend of yours with the same car details a recent disaster involving an expensive trip to the mechanic, despite Consumer Reports data about reliability. (Sensational stories trump data.)
  • A recent and vivid story (overplayed on 24/7 news channels) details the horrible crash of some airplane over the Indian Ocean and as a result you opt to drive to your aunt’s house in Texas for New Year’s as opposed to taking a commercial plane. The image of a fiery plane crash registers in your mind more vividly than accident statistics that would tell you that driving is more dangerous.
  • You drive a motorcycle. Until a relative of yours dies in a motorcycle accident. As if that fact is more telling than the statistics that already existed.

“The way to avoid the ills of the narrative fallacy is to favor experimentation over storytelling, experience over history, and clinical knowledge over theories.”

Engaging in Black Swan-Dependent Careers

We unfairly equate money and “success” with value. A struggling researcher/artist/writer/musician is looked down upon relative to those who pursue nonscalable professions with steady income. What we ignore is the fact that those struggling researchers are engaged in Black Swan-dependent activities. In other words, the researcher who is studying cancer will not, and cannot, experience a steady progression of impressive results. If she discovers the cure to cancer, it will necessarily be unexpected and unpredictable.

Meanwhile, the stock broker (for example) who has an impressive income but is actually contributing very little (probably negative) value to society is perceived as more respectable. Indeed, he or she is more likely the type we want our daughters (or sons) to marry.

We have an emotional need for linear progress. We would rather experience one huge loss than a string of small losses. We would rather experience years of income than one major windfall (are you really going to be any happier in the long term winning the lottery?). The stock broker who receives steady and progressive income might be happier than the researcher who discovers nothing for three decades, even if she does eventually find the cure to cancer, all else being equal.

In the global and social realities we have created, linear relationships are the exception, not the rule.

If we are engaged in Black Swan-dependent activities (like researching the cure to cancer), we can improve our emotional state by surrounding ourselves with peers and communities engaged in the same types of activities. We are social animals.

In other words, if you are a struggling musician, and all your friends are rich stock brokers, you may have a hard time.

Silent Evidence

We are biased toward the evidence we see, and against that which we don’t see.

“The neglect of silent evidence is endemic to the way we study comparative talent, particularly in activities that are plagued with winner-take-all attributes. We may enjoy what we see, but there is no point reading too much into success stories because we do not see the full picture.”

“Say you attribute the success of the nineteenth-century novelist Honoré de Balzac to his superior ‘realism,’ ‘insights’, ‘sensitivity,’ ‘treatment of characters,’ ‘ability to keep the reader riveted,’ and so on. These may be deemed ‘superior’ qualities that lead to superior performance if, and only if, those who lack what we call talent also lack these qualities. But what if there are dozens of comparable literary masterpieces that happened to perish? And, following my logic, if there are indeed many perished manuscripts with similar attributes, then, I regret to say, your idol Balzac was just the beneficiary of disproportionate luck compared to his peers. Furthermore, you may be committing an injustice to others by favoring him [emphasis mine].”

It is not enough to study the traits of the successful (people, projects, ideas, etc.), we must also study the traits of the failures if we are going to gain any valuable knowledge or insight into causal relationships.

A hypothetical example of the effect (or lack thereof) that silent evidence can have on decision making: let’s say the sensational news of a disaster (like Hurricane Katrina) causes the government to unanimously and approvingly direct public funds to the cause of rebuilding the devastated town. Because public funds are a finite resource, other initiatives will suffer to a degree as a result. For instance, cancer research may take a hit (in the form of opportunity cost if not explicitly). The result could be lives lost to cancer that would have been saved. These lives come at the direct expense of the decision to direct funds to disaster relief, but they will never show up in the evidence because the relationships are subtle. Unfortunately, even if the data somehow shows that cancer patients would benefit to a greater degree, the lonely and depressed cancer patients lying in hospital beds do not register as vividly, emotionally, as the images of shattered families, broken homes, and crying children that we see in the media.

When it comes to politics, “we can see what governments do, and therefore sing their praises – but we do not see the alternative. But there is an alternative; it is less obvious and remains unseen.”

When it comes to medicine, “our neglect of silent evidence kills people daily. Assume that a drug saves many people from a potentially dangerous ailment, but runs the risk of killing a few, with a net benefit to society. Would a doctor prescribe it? He has no incentive to do so. The lawyers of the person hurt by the side effects will go after the doctor like attack dogs, while the lives saved by the drug might not be accounted for anywhere. A life saved is a statistic; a person hurt is an anecdote. Statistics are invisible; anecdotes are salient.”

For a more in-depth essay on unseen evidence, see Frédéric Bastiat’s “What Is Seen and What Is Not Seen” (1850).

The bias to ignore silent evidence causes us to over-value successes, especially when we ourselves are the survivor (consider the argument that there must be a creator behind human life because of the extraordinary odds of it arising on its own. Yet if we were an outsider observing a large population of universes, we would expect a certain number of them to produce intelligent life).

To avoid self-sampling, consider the “reference point argument” which says “do not compute odds from the vantage point of the winning gambler,” or the successful mutual fund, or the successful entrepreneur, etc. “but from all those who started in the cohort. (…) If you look at the population of beginning gamblers taken as a whole, you can be close to certain that one of them … will show stellar results just by luck. (…) But from the reference point of the winner, (…) a long string of wins will appear to be too extraordinary an occurrence to be explained by luck.”
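
A quick simulation (my own, with made-up numbers) of the reference point argument: start a large cohort of gamblers who each win or lose a fair coin flip every round, then notice that impressive streaks are guaranteed to appear somewhere in the cohort.

```python
import numpy as np

rng = np.random.default_rng(0)
n_gamblers, n_rounds = 10_000, 20

# Each round, every gambler wins (+1) or loses (-1) a fair coin flip
results = rng.choice([-1, 1], size=(n_gamblers, n_rounds))
totals = results.sum(axis=1)

# A net score of +10 over 20 flips means at least 15 wins out of 20
lucky = int((totals >= 10).sum())
print(f"Best net score among {n_gamblers} gamblers: {totals.max():+d}")
print(f"Gamblers with 15+ wins out of 20 by pure luck: {lucky}")
# Seen from the whole cohort, such streaks are expected;
# seen from the winner's seat, they look like skill.
```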

Ludic Fallacy

Gambling and casinos are terrible examples of uncertainty because in the long run, gaming odds are strictly Gaussian (meaning they fall along the bell curve) and outliers get ironed out in the averages.

Casinos that focus too much on controlling “risk” in the gaming systems (e.g. expensive monitoring systems to catch cheaters) may be unprepared for the real uncertainties that threaten their business (not that preventing cheaters and controlling whales is not important).

Taleb describes how one casino experienced its largest losses “outside their sophisticated models:”

  1. $100 million loss when a performer in the Siegfried and Roy show was maimed by a tiger. No one in the company had even conceived of the possibility.

  2. A disgruntled contractor who had been injured while working on a construction project was so angry he tried to detonate explosives in the casino.

  3. For years an employee of the casino had failed to file an important IRS form that documented gamblers’ profits. The result was an enormous fine that could have been made worse by the revocation of their license.

  4. The casino owner’s daughter was kidnapped and there was a ransom involved.

Conclusion: “the dollar value of these Black Swans, the off-model hits and potential hits I’ve just outlined, swamp the on-model risks by a factor of close to 1,000 to 1. The casino spent hundreds of millions of dollars on gambling theory and high-tech surveillance while the bulk of their risks came from outside their models. All this, and yet the rest of the world still learns about uncertainty and probability from gambling examples.”

Epistemic Arrogance

Epistemic Arrogance: when we are asked to evaluate our own confidence about predictions, we tend to be very wrong. In fact, multiple experiments suggest that when we believe our prediction has just a 2 percent error rate, the actual error rate is between 15 and 30 percent.

“Epistemic arrogance bears a double effect: we overestimate what we know, and underestimate uncertainty, by compressing the range of possible uncertain states (i.e., by reducing the space of the unknown).”

In many cases, more information makes us worse off, because we form too many hypotheses from information which is really just random noise.

We do not change our theories easily, so it is better to hold off on forming concrete theories about the world. “When you develop your opinions on the basis of weak evidence, you will have difficulty interpreting subsequent information that contradicts these opinions.” See confirmation bias and commitment and consistency bias.

When considering the opinions and conclusions of “experts,” questioning their skill is far less important than questioning their error rate. In other words, how overconfident are they about their own predictions?

There are many professions in which “experts” are no better at making predictions than the layman, and therefore these professions have no real experts. These fields which “deal with the future and base their studies on the non-repeatable past have an expert problem.”

Economics (historically) has an expert problem.

“There are these people who produce forecasts uncritically. When asked why they forecast, they answer, ‘Well, that’s what we’re paid to do here.’ My suggestion: get another job.”

When considering government policy or business decisions, it often matters less what a particular forecast says than what the worst-case scenario looks like.

Prediction Errors Explode Upwards When Extended Through Time

Almost all important inventions were created unexpectedly and not on a particular schedule, although we have a tendency to look back and wonder why it “took so long to arrive at something so obvious.”

On the subject of new inventions and technology: “prediction requires knowing about technologies that will be discovered in the future. But that very knowledge would almost automatically allow us to start developing those technologies right away. Ergo, we do not know what we will know.”

The error rate in predictions and forecasts concerning future trajectories in a dynamic system explodes upwards the further out in time you go. For example, predicting where the cue ball will send the first ball it impacts is relatively simple. If you want to predict how that second ball will affect the third ball, you have a challenge. To predict the 50th interaction, you would need to understand the entire universe, “down to every single atom!”

If predictions involving billiard balls are that complicated, consider predictions involving the interactions of individual human beings, each possessing free will and their own incentives, assumptions, personalities, etc.
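
A toy illustration (mine, not from the book) of how forecast error explodes with the horizon: iterate a simple chaotic map from two starting points that differ by one part in a billion and watch the trajectories diverge.

```python
# Logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4.0)
def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.400000000, 50)   # the "true" initial condition
b = trajectory(0.400000001, 50)   # a measurement off by one part in a billion

for step in (1, 10, 25, 50):
    print(f"step {step:2d}: difference = {abs(a[step] - b[step]):.6f}")
# The gap grows roughly exponentially: a measurement error that is invisible
# one step ahead dominates the forecast a few dozen steps out.
```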

Trying to forecast social dynamics (in economics, for example) presents several severe limitations. Individuals are not consistent in their choices and decisions because, as we have seen, their own predictions are prone to epistemic arrogance (false assumptions and over-confidence). More than this, their predictions themselves vary randomly. Anchoring (bias) is one cause of variance in people’s predictions. If a person is asked to predict, for example, how many Uber drivers are on the road at any given time, their answer will differ depending on whether you first asked them for the last four digits of their Social Security number. So there are basic psychological tendencies at play in an unpredictable way.

“In people’s minds, the relationship between the past and the future does not learn from the relationship between the past and the past previous to it. There is a blind spot: when we think of tomorrow we do not frame it in terms of what we thought about yesterday on the day before yesterday. Because of this introspective defect we fail to learn about the difference between our past predictions and the subsequent outcomes. When we think of tomorrow, we just project it as another yesterday.”

“Accordingly, an element in the mechanics of how the human mind learns from the past makes us believe in definitive solutions – yet not consider that those who preceded us thought that they too had definitive solutions. We laugh at others and we don’t realize that someone will be just as justified in laughing at us on some not too remote day.”

Reverse engineering history is a bit like looking at a pool of water and trying to determine how the ice cubes were arranged before they melted (and are you so sure the puddle came as a result of melting ice cubes in the first place?)

“There is no functional difference in practice between” true randomness and deterministic chaos (i.e. a reality which is completely predictable if all parts are known) because “randomness, in the end, is just unknowledge.” In other words, whether the universe is truly random or deterministic makes no difference for us because of the reverse engineering problem.

What Can We Do?

Don’t stop making predictions, just avoid being a fool on the big things. “Do not listen to economic forecasters or to predictors in social science (they are all entertainers), but do make your own forecast for the picnic.”

Fear of failure, and shame over losses, can lead to “conservative” behavior which is actually more dangerous than “risk-taking.” Like “collecting nickels in front of steamrollers.”

For example (mine): dedicating your life to a single job in a “stable and safe” industry, in which you do not grow very much but have a dependable income. When huge shocks to the economy suddenly eliminate your job/company/position/etc., you may have more difficulty adjusting than the individual who spread himself much wider – who worked multiple jobs, started different businesses, failed several times, but learned how to adapt to a dynamic and changing environment. In the end, who took the bigger “risk”? Who is more likely to survive?

Barbell Strategy: rather than aiming for a mildly aggressive or conservative strategy, combine hyper-conservative strategies with hyper-aggressive ones.
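
A stylized sketch of the barbell (my own numbers, purely illustrative, not Taleb's figures or advice): put roughly 90% in something nearly riskless and 10% in highly speculative bets, so the worst case is capped while the upside stays open, unlike a 100% “mildly aggressive” allocation whose hidden tail risk has no floor.

```python
def barbell_outcome(capital, speculative_multiplier, safe_rate=0.02,
                    safe_fraction=0.90):
    """Portfolio value after one period under a barbell allocation.

    The speculative sleeve can go to zero or pay off hugely; the safe sleeve
    just earns a small known rate. All numbers are illustrative assumptions.
    """
    safe = capital * safe_fraction * (1 + safe_rate)
    speculative = capital * (1 - safe_fraction) * speculative_multiplier
    return safe + speculative

capital = 100_000
for scenario, mult in [("speculative bets wiped out", 0.0),
                       ("nothing happens", 1.0),
                       ("positive Black Swan", 50.0)]:
    print(f"{scenario:27s}: {barbell_outcome(capital, mult):>12,.0f}")
# Worst case is bounded (about 91,800); best case is open-ended.
```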

Some Tricks:

  1. Learn the difference between activities that tend to generate POSITIVE Black Swans vs NEGATIVE Black Swans. For example, catastrophe insurance businesses can only experience negative Black Swans (unexpected shocks) because of the nature of the business: insurance premiums (income) are fixed, while any large and unexpected shock (like a hurricane) will necessitate payouts (losses). An example of a positive Black Swan activity would be scientific research. Anything unexpected will by definition be a discovery (which is a good thing for your career). The worst case is that you never experience a Black Swan.

  2. Don’t be narrow-minded. Trying to “predict” the next Black Swan is impossible and likely to make you unprepared for one (as it will most likely be anything BUT what you were predicting).

  3. Seize opportunities. Say yes to anything that looks like an opportunity. “Many people do not realize that they are getting a lucky break in life when they get it. If a big publisher (or a big art dealer or a movie executive or a hotshot banker or a big thinker) suggests an appointment, cancel anything you have planned: you may never see such a window open up again.” For this reason alone you should (if you want to take advantage of uncertainty) live in big cities. The simple fact that you are exposed to more interactions opens you up to greater possibilities (that are unpredictable).

  4. Be skeptical of any precise plans made by the government.

“Avoid being blinded by the vividness of one single Black Swan. Have as many of these small bets as you can conceivably have. Even venture capital firms fall for the narrative fallacy with a few stories that ‘make sense’ to them; they do not have as many bets as they should. If venture capital firms are profitable, it is not because of the stories they have in their heads, but because they are exposed to unplanned rare events.”

When it comes to decision making, focus on the consequences, not the probabilities. “I don’t know the odds of an earthquake, but I can imagine how San Francisco might be affected by one. This idea that in order to make a decision you need to focus on the consequences (which you can know) rather than the probability (which you can’t know) is the central idea of uncertainty.”

“I have said that nobody is safe in Extremistan. This has a converse: nobody is threatened with complete extinction either. Our current environment allows the little guy to bide his time in the antechamber of success – as long as there is life, there is hope.”

We focus a lot on income inequality, but there are more troubling inequalities that we overlook. “The disproportionate share of the very few in intellectual influence is even more unsettling than the unequal distribution of wealth – unsettling because, unlike the income gap, no social policy can eliminate it.”

The bell curve (Gaussian) is inappropriately applied to social (Extremistan) data.

“Measures of uncertainty that are based on the bell curve simply disregard the possibility, and the impact, of sharp jumps or discontinuities and are, therefore, inapplicable in Extremistan.”

“Take any series of historical prices or values. Break it up into subsegments and measure its ‘standard’ deviation. Surprised? Every sample will yield a different ‘standard’ deviation. Then why do people talk about standard deviations? Go figure.”
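
Taleb’s subsegment exercise is easy to reproduce (a sketch using simulated data, since no actual price series is included here; the heavy-tailed distribution is my stand-in): split a fat-tailed series into chunks and compare each chunk’s “standard” deviation with the same exercise on Gaussian data.

```python
import numpy as np

rng = np.random.default_rng(7)
n, chunks = 10_000, 10

gaussian = rng.normal(size=n)                  # Mediocristan-style data
heavy_tailed = rng.standard_t(df=1.5, size=n)  # fat-tailed stand-in for returns

def chunk_stds(series, k):
    """Standard deviation of each of k equal subsegments."""
    return [float(seg.std()) for seg in np.split(series, k)]

print("Gaussian subsegment std devs:    ",
      [round(s, 2) for s in chunk_stds(gaussian, chunks)])
print("Heavy-tailed subsegment std devs:",
      [round(s, 2) for s in chunk_stds(heavy_tailed, chunks)])
# The Gaussian chunks all hover near 1.0; the heavy-tailed chunks disagree
# wildly, because a single outlier dominates a chunk's "standard" deviation.
```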

Value erudition over status. Variety (shocks) over comfort.

Don’t worry so much about small failures. Worry about the large, fatal failures. Be skeptical of anything that is “safe.”

Worry less about embarrassment and more about missing opportunities.

Be aggressive in opportunities for positive Black Swans, and hyper conservative where negative Black Swans are concerned.

“We are quick to forget that just being alive is an extraordinary piece of good luck, a remote event, a chance occurrence of monstrous proportions. (…) So stop sweating the small stuff. Don’t be like the ingrate who got a castle as a present and worried about the mildew in the bathroom. Stop looking the gift horse in the mouth – remember that you are a Black Swan.”