The Black Swan: The Impact of the Highly Improbable

⭐️⭐️⭐️

This book could have been a quarter of the length (the latter half could be discarded entirely), and there are other books out there that explain the same concepts better. Taleb is entertaining, but he goes on rants and pads hundreds of pages with unnecessary stories. He broadly dismisses statisticians and the entire field of social science, economics included, calling most of its practitioners charlatans who don’t understand extreme events, all while cramming the book with self-made jargon for concepts that already have names, presumably hoping his terms catch on instead. It feels like he wrote the book at this length to prove his expertise. He rants endlessly against the normal distribution, treating it as if it were the only probability distribution anyone uses, and he attacks experts without engaging with the concepts he is critiquing. He preemptively attacks anyone who disagrees with him, and I believe he contradicts himself a few times within the book.

He is right that extremes tend to be the things that matter most, and that these extremes tend to be unknowable (“unknown unknowns” or “black swans”: things that aren’t even in our models or thought processes), but he doesn’t offer much guidance on how to change your behavior beyond “shit happens, be ready”. He practically proposes throwing out social science and statistics because they do more harm than good. Most of the book is just skepticism done annoyingly.

Now for my summarized version of the good parts of the book (most of which are repackaged versions of other concepts):

  • Problem of Induction / Hume’s Problem — Sometimes the past cannot be used to predict the future. Unknown unknowns happen all the time: a turkey that is cared for every day has no clue that the one who feeds it will be the one to kill and eat it. “You haven’t died, so you think you’re immortal??”

  • Some fields don’t really have experts, only people who are familiar with the latest models, i.e., people who do no better than chance. Why are these people employed? Well, there haven’t been many studies on how effective they actually are compared to chance, and people’s livelihoods depend on these fields being perceived as useful. As Warren Buffett said, “never ask a barber if you need a haircut”, and as Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” Just as an oil executive can convince themselves that oil is harmless, economists can convince themselves that their models are beneficial, or at worst harmless (“all models are wrong, but some are useful” — George E. P. Box), when in reality some models may be more harmful than no model at all.

  • Chaos theory doesn’t hinge on true randomness at all, only on the fact that the state space is so large that it is impossible to account for everything. To predict the movement of anything even a short way into the future, you would have to take into account the gravitational pull of small amounts of matter millions of miles away. The distinction between true randomness and mere uncertainty doesn’t really matter, since we will never be able to know the properties of everything around us.
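A tiny illustration of this point (my own sketch, not from the book): the logistic map is completely deterministic, yet an initial difference far below any realistic measurement precision makes long-range prediction hopeless.

```python
# Sensitive dependence on initial conditions: the logistic map
# x -> r * x * (1 - x) with r = 4 is fully deterministic, yet two
# starting points differing by 1e-10 diverge completely within ~50 steps.

def trajectory(x, steps, r=4.0):
    """Iterate the logistic map from x and return the visited points."""
    xs = []
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

a = trajectory(0.2, 60)
b = trajectory(0.2 + 1e-10, 60)

# The gap starts around 1e-10 and grows to the order of the whole
# interval [0, 1]: the tiny unknown ends up dominating the forecast.
gap = max(abs(p - q) for p, q in zip(a, b))
```

No noise is injected anywhere; the unpredictability comes purely from not knowing the starting point to infinite precision.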

  • The problem of positive vs. negative empiricism. 1,000 days can’t prove you right, but one day can prove you wrong. Science is mostly based on negative empiricism, where you try to disprove your claim as a way to get at real knowledge. Forecasts tend to use positive empiricism, or regressions: things have happened this way before, so we assume they will continue to happen this way. Hempel’s raven (the Raven Paradox) is interesting here: it implies that seeing a non-black thing that is not a raven supports the claim “all ravens are black”, even though it doesn’t feel like you learned anything about ravens.

  • Overconfidence is forced into our lives, particularly through school, where history class asks you to explain why certain events happened rather than letting you say you aren’t really sure. Far from being inevitable, history could have been swayed by very small events. This is related to the poverty of historicism — human knowledge and social phenomena are too complex and unpredictable for deterministic predictions of the future.

  • Being able to predict the future also means being able to predict future technologies. But predicting a technology with enough accuracy essentially means you already know how to build it; hence the issue. As my friend puts it, “predicting the future is future-hard” (referring to NP-hard problems). Scientists used to be seen as “sleepwalkers”, accidentally stumbling onto great discoveries such as the cosmic microwave background.

  • Taleb has some good comments on survivorship bias (though annoyingly he never calls it by its real name, only “silent evidence” and the like 😒). He talks about backwards causality: in a weird way, we are standing here today because the Bubonic Plague didn’t kill everyone, and the Plague “didn’t kill everyone” only from the vantage point of those of us standing here today. He relates this to the fine-tuning argument for the universe’s creation. He also applies survivorship bias to beginner’s luck — it is true that gamblers tend to be lucky in the beginning, because it’s the ones who got lucky early who kept gambling. Likewise, many wildly rich people only made it to where they are by taking bad bets that led many others to ruin: the same kind of risk-taker either goes bust or makes it big, but you only hear about the ones who make it big. Playing it safe might be better overall.
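The “you only hear about the winners” point is easy to simulate. This is my own toy model, not Taleb’s, and the 45%/55% odds are illustrative assumptions: each player repeatedly takes a bet that looks good on average but ruins the typical player, and we compare everyone’s results against the survivors’.

```python
import random
import statistics

def simulate(n_players=10_000, n_rounds=20, seed=0):
    """Each round a player either doubles their wealth (45% chance) or
    loses 60% of it (55% chance).  The per-round expected value is
    positive (0.45 * 2 + 0.55 * 0.4 = 1.12), but the expected log
    growth is negative, so the typical player is slowly ruined while
    a lucky few get very rich."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_players):
        wealth = 1.0
        for _ in range(n_rounds):
            wealth *= 2.0 if rng.random() < 0.45 else 0.4
        finals.append(wealth)
    winners = [w for w in finals if w > 1.0]  # the ones you hear about
    return (statistics.mean(finals),
            statistics.mean(winners),
            statistics.median(finals))

mean_all, mean_winners, median_all = simulate()
# Judging the strategy by the winners alone wildly overstates it:
# the median player ends with a small fraction of their starting wealth.
```

The winners’ average dwarfs the overall average even though most players end up worse off than they started, which is exactly the silent evidence the dead gamblers never get to present.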

  • One concept I hadn’t heard before (though I wouldn’t be surprised if it predates him) is what the author calls the Ludic Fallacy, or Platonism. It’s essentially the mistake of assuming statistical models apply in situations where they don’t, leading to overconfidence. It is confusing the map for the territory, almost to the point of assuming reality is wrong when it doesn’t fit the model. This is well represented by Taleb’s story of Dr. John and Fat Tony. You present a coin that you say is fair, and it comes up heads 99 times in a row. You then ask both men the chance of it coming up heads on the next flip. Dr. John says 50%, because you said the coin is fair, while Fat Tony says 99%, because you more likely lied about the coin being fair and took him for a sucker (reminds me of the Parable of the Dagger https://youtu.be/sLVrQTd-OHI). Fat Tony is more likely to be right — he is thinking outside the provided box.
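Fat Tony’s instinct can be framed as a Bayesian update (my framing, not the book’s): give the claim “the coin is fair” anything less than absolute certainty, and 99 heads in a row crushes it. The rigged-coin model below (always lands heads) and the one-in-a-million prior are illustrative assumptions.

```python
def posterior_rigged(prior_rigged, n_heads):
    """P(coin is rigged | n heads in a row), assuming a rigged coin
    always lands heads and a fair coin lands heads with probability 0.5."""
    likelihood_fair = 0.5 ** n_heads
    likelihood_rigged = 1.0
    numerator = likelihood_rigged * prior_rigged
    return numerator / (numerator + likelihood_fair * (1 - prior_rigged))

# Even a one-in-a-million prior suspicion is overwhelmed by 99 heads:
p_rigged = posterior_rigged(1e-6, 99)
p_next_heads = p_rigged * 1.0 + (1 - p_rigged) * 0.5

# p_rigged is essentially 1, so the next flip is almost surely heads,
# which is Fat Tony's answer.  Dr. John's 50% corresponds to refusing
# any doubt at all, i.e. a prior of exactly 0.
```

The exact prior barely matters; 0.5**99 is so small that any nonzero suspicion of cheating ends up dominating.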

  • Quarterly reviews make long-term payoffs hard to pursue, since your manager is always looking for short-term results.

  • Humans tend to turn everything into a narrative, even where none exists. This is likely because it is easier to store patterns than disconnected data points, even when the pattern adds information that wasn’t there. Ex: “the king died, then the queen died” vs. “the king died, then the queen died of grief”. This might be why we tend to over-narrativize the past, which feeds into the poverty of historicism.