The Black Swan: The Impact of the Highly Improbable
⭐️⭐️⭐️
This book could have been a quarter of the length (the latter half could be discarded entirely), and there are other books out there that explain the same concepts better. Taleb is entertaining, but he goes on rants and pads hundreds of pages with unnecessary stories. He broadly dismisses statisticians and the entire field of social science, economics included, calling most of them charlatans who don't understand extreme events, all while cramming the book with self-made jargon for concepts that already have names, hoping his terms catch on instead. It feels like he wrote the book at this length just to prove his expertise. He rants endlessly against the normal distribution, treating it as if it were the only probability distribution anyone uses, and attacks experts without engaging with the concepts he is critiquing. He preemptively attacks anyone who disagrees with him. I believe he contradicts himself a few times within the book.
He is right that extremes tend to be the things that matter most, and that these extremes tend to be unknowable ("unknown unknowns" or "black swans": things that aren't even in our models or thought processes), but he doesn't do much in the way of saying how you should change your behavior other than "shit happens, be ready". He practically proposes that social science and statistics be thrown out because they do more harm than good. Most of the book is just skepticism, done annoyingly.
Now for my summarized version of the good parts of the book (most of which are repackaged versions of other concepts):
-
Problem of Induction / Hume's Problem: sometimes the past cannot be used to predict the future. Unknown unknowns happen all the time; a turkey that is cared for every day has no clue that the one who feeds it will be the one to kill and eat it. "You haven't died yet, so you think you're immortal?"
-
Some fields don't really have experts, only people who are familiar with the latest models, i.e. people who don't do much better than chance. Why are these people employed? Well, there haven't been studies on how effective they actually are compared to chance. Additionally, people's livelihoods depend on these fields being perceived as useful. As Warren Buffett said, "never ask a barber if you need a haircut", and as Upton Sinclair said, "It is difficult to get a man to understand something, when his salary depends on his not understanding it." Just like an oil executive can convince themselves that oil is harmless, economists can convince themselves that their models are beneficial, or at worst merely useless ("all models are wrong, but some are useful", as George E. P. Box put it), when in reality some models may be actively more harmful than no model at all.
-
Chaos theory doesn't hinge on true randomness at all, only on the fact that the state space is so large that it is impossible to account for everything. To predict the movement of anything even a short way into the future, you would have to take into account the gravitational pull of small amounts of matter millions of miles away. The distinction between true randomness and mere uncertainty doesn't really matter, since we will never be able to know the properties of everything around us.
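This point can be sketched with the logistic map, a standard chaos-theory toy model (my illustration, not an example from the book): two starting points differing by one part in a billion end up on completely different trajectories, with zero randomness involved.

```python
# Sensitivity to initial conditions via the logistic map.
# The map is fully deterministic, yet a billionth-sized error in the
# starting point destroys predictability within a few dozen steps.

def logistic(x, r=4.0):
    """One step of the logistic map; r = 4 is the fully chaotic regime."""
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-9   # nearly identical initial conditions
max_div = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_div = max(max_div, abs(a - b))

print(f"maximum divergence after 60 steps: {max_div:.3f}")
```

No amount of extra data fixes this: to push the prediction horizon out by a few steps, you need exponentially more precision about the initial state, which is Taleb's point about uncertainty vs. randomness being a distinction without a practical difference.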
-
The problem of positive vs. negative empiricism: 1,000 days can't prove you right, but 1 day can prove you wrong. Science is mostly based on negative empiricism, where one tries to disprove a claim as a way to get at real knowledge. Forecasts tend to use positive empiricism, or regressions: things have happened this way before, so we assume they will continue to happen this way. Hempel's raven (the Raven Paradox) is interesting in that it implies that seeing a non-black thing that is not a raven supports the claim that "all ravens are black", despite it not seeming like the non-black thing taught you anything about ravens.
-
Overconfidence is forced into our lives, particularly through school, where you are asked in history class to explain why certain events happened rather than saying you aren't really sure. History is far from inevitable; it could have been swayed by very small events. This is also known as the poverty of historicism (Karl Popper's term): human knowledge and social phenomena are too complex and unpredictable for deterministic predictions of the future.
-
Being able to predict the future also means being able to predict future technologies. But predicting a technology with enough accuracy essentially means you already know how to make it, hence the problem. As my friend puts it, "predicting the future is future-hard" (riffing on NP-hard problems). Scientists used to be seen as "sleepwalkers", accidentally stumbling onto great discoveries such as the cosmic microwave background.
-
Taleb has some good comments on survivorship bias (though annoyingly he never calls it by its usual name, only "silent evidence" and other things). He talks about backwards causality: in a weird way, we are standing here today because the Bubonic Plague didn't kill everyone, and the Bubonic Plague "didn't kill everyone" precisely because we are the ones standing here to tell the story. He refers to the fine-tuning argument for the universe's creation. He also talks about survivorship bias in reference to beginner's luck: it is true that gamblers tend to have been lucky in the beginning, because it's the ones who got lucky early who became gamblers. He also talks about how many wildly rich people only made it to where they are because they took bad bets, the same bets that led many others to ruin. The same kind of risk-takers either get wiped out early or make it big, but you only hear about the ones who make it big. Playing it safe might be better overall.
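The beginner's-luck effect falls out of pure chance. Here is a minimal simulation (my own sketch, with made-up parameters: ten initial fair bets, and a player "becomes a gambler" only if at least six of them win):

```python
# Survivorship bias producing "beginner's luck" from fair 50/50 bets.
# Nobody here has any edge; the surviving gamblers just happen to be
# the ones whose first bets went well.

import random

random.seed(0)

def becomes_gambler(first_bets=10, wins_needed=6):
    """Keep gambling only if the first few fair bets mostly won."""
    wins = sum(random.random() < 0.5 for _ in range(first_bets))
    return wins >= wins_needed

population = 100_000
survivors = sum(becomes_gambler() for _ in range(population))

# Every survivor won at least 60% of their first ten fair coin flips,
# so if you poll active gamblers, "beginner's luck" looks real.
print(f"{survivors} of {population} became 'gamblers'")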
-
One of the concepts I hadn't heard before (though I wouldn't be surprised if it already existed) is what the author calls the Ludic Fallacy, or Platonism. It is essentially the mistake of assuming statistical models apply to situations where they don't, leading to overconfidence. It is confusing the map for the territory, almost to the point of assuming reality is wrong when it doesn't fit the model. This is well illustrated by Taleb's story of Dr. John and Fat Tony. You present a coin you claim is fair, and it comes up heads 99 times in a row. You then ask both men the chance that the next flip comes up heads. Dr. John says 50%, because you said the coin is fair, while Fat Tony says about 99%, because it is far more likely you lied about the coin being fair and took him for a sucker (reminds me of the Parable of the Dagger: https://youtu.be/sLVrQTd-OHI). Fat Tony is more likely to be right; he is thinking outside of the provided box.
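Fat Tony's instinct can be phrased as a Bayesian update. This is my rough sketch, not Taleb's math, and the numbers are assumptions: a 0.1% prior that the coin is rigged, and a rigged coin that always lands heads. Even that tiny prior suspicion dominates after 99 straight heads.

```python
# Bayes' rule applied to the Dr. John / Fat Tony coin.
# Hypotheses: "fair" (heads with prob 0.5) vs "rigged" (always heads).

fair_prior = 0.999      # we mostly believe the "fair coin" claim
rigged_prior = 0.001    # tiny suspicion the presenter is lying (assumed)

# Likelihood of observing 99 heads in a row under each hypothesis
p_data_fair = 0.5 ** 99     # astronomically small
p_data_rigged = 1.0         # certain, under the assumed rigged coin

post_rigged = (rigged_prior * p_data_rigged) / (
    rigged_prior * p_data_rigged + fair_prior * p_data_fair
)

# Chance the next flip is heads, averaged over both hypotheses
p_next_heads = post_rigged * 1.0 + (1 - post_rigged) * 0.5
print(f"P(rigged | 99 heads) = {post_rigged}")
print(f"P(next flip heads)   = {p_next_heads}")
```

Dr. John's 50% answer corresponds to setting `rigged_prior = 0`, i.e. treating the stated model as unquestionable, which is exactly the Ludic Fallacy.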
-
Quarterly reviews make long-term payoffs hard to pursue, since your manager is always looking for immediate results.
-
Humans tend to turn everything into a narrative, even when no narrative exists. This is likely because it is easier to store patterns than disconnected data points, even when the pattern means storing extra information. Ex: "the king died and then the queen died" vs. "the king died and then the queen died of grief". This might be why we tend to over-narrativize the past, and it feeds into the poverty of historicism.