Can a butterfly really cause a hurricane? Having worked in complexity science, I realize that I have a problem with the conventional understanding of the butterfly effect. Sure, in ideal deterministic chaotic systems, the butterfly effect does a great job illustrating the quasi-stochastic nature of chaos and its sensitivity to small perturbations. But in the real complex world, far from any mathematically idealized deterministic model, there is no reasonable sense in which we can say that a butterfly's wings can cause a hurricane. Even if this isn't surprising, as intuitively people won't attribute a hurricane to a butterfly, I would argue that causally attributing World War I to the assassination of Archduke Ferdinand follows the same fallacy.

Consider a pot of supercooled water. Any minute impurity in it can seed the freezing transition of the entire pot. But can we really say that this impurity "caused" the transition? This seems to depend on the counterfactual world: what would have happened otherwise? If the pot is sitting in an isolated clean room where there would not typically be any dust or other perturbations, then this one particular impurity may be to blame for the phase transition. But out in the messy "real world," abundant with impurities of all kinds, supercooled water would not survive long anyhow — if not this impurity, then another would have seeded the transition a moment later.

Now, this counterfactual definition of causality, formalized by Judea Pearl using tools from information theory, similarly poses problems for the classic interpretation of the butterfly effect. Would the hurricane have happened if not for the butterfly? To answer this question carefully, it isn't enough to look at what the world would look like if we removed the butterfly but held everything else fixed. Instead, we must consider the full statistical ensemble of possible worlds, and quantify to what extent the butterfly shifts that ensemble.

To model this in a mathematically idealized setting, we could take some deterministic chaotic system, add some small stochastic noise to it at all times t to generate the statistical ensemble of possibilities, and then consider the effect of some slight "butterfly" perturbation at a given time t (a toy simulation along these lines is sketched at the end of this post). In the typical scenario, the impact of this single perturbation will not rise above the impact of the persistent background noise inherent to any complex real-world system. As such, the impact of the butterfly's wings will typically not rise above that of the persistent stochastic inputs affecting the Earth (such as from quantum noise or outer space).

This reasoning also undermines some of the reductionist, mechanistic perspective we often have on the world (especially in the proverbial "West"). Roughly speaking, it may often be quite difficult to cleanly attribute a real-world outcome to one particular cause. For example, regretful thinking like "if only I had…" often relies on holding the world fixed and changing only that one past decision — which is just as problematic as for butterflies and hurricanes. And so, perhaps indeed, "the devil is in the details" — it isn't the one-time events that truly determine our lives, but rather how we guide their continual unfolding.

Do you buy the logic—and the conclusion? Or is this a bit of a straw-man argument, and people don't really think this way about causes?

[cross-posted from my blog https://medium.com/bs3]
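Here is a minimal simulation sketch of the idealized experiment described above. It uses a noisy logistic map as a stand-in for "some deterministic chaotic system"; the map, noise level, perturbation size, and the statistic being compared are my own illustrative choices, not anything canonical. The point is that the ensemble of outcomes with the extra "butterfly" perturbation is statistically indistinguishable from the ensemble without it.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_ensemble(n_runs=5000, T=100, noise=1e-3, butterfly=0.0):
    """Noisy logistic map x -> 4x(1-x); `butterfly` is a one-time extra
    perturbation applied to the initial condition."""
    finals = np.empty(n_runs)
    for i in range(n_runs):
        x = 0.3 + butterfly                  # same initial condition in every run
        for _ in range(T):
            x = 4.0 * x * (1.0 - x)          # deterministic chaotic step
            x += noise * rng.normal()        # persistent background noise at all times
            x = min(max(x, 0.0), 1.0)        # keep the state inside [0, 1]
        finals[i] = x
    return finals

base = run_ensemble(butterfly=0.0)    # ensemble of possible worlds, no butterfly
flap = run_ensemble(butterfly=1e-6)   # same ensemble, plus one tiny perturbation

# If the butterfly were a meaningful "cause", it would have to shift this ensemble
# by more than the background noise does; in practice the two match to sampling error.
print("P(x_T > 0.5) without butterfly:", (base > 0.5).mean())
print("P(x_T > 0.5) with butterfly:   ", (flap > 0.5).mean())
```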
Let us ignore quantum indeterminacy.
The butterfly effect comes from chaos theory. The word "cause" as used in chaos theory refers to whether changing a system's initial state from t_a to t_b will change its final state from f_a to f_b. Human intentional control is irrelevant.
In this comment I use chaos theory’s definition of "cause".
Yes.
The water wouldn’t have transitioned at that point in time. (Assuming there was no other perturbation.)
Probably not. (Rounds to zero.)
No. You are conflating two different uses of the word "cause". One is a vague philosophical definition involving human intention. The other is a technical definition derived from mathematics.
You never defined the word "cause". I think that if you rigorously defined the word "cause" at the beginning of your post then your argument would self-destruct. Either it would no longer apply to chaos theory or it would no longer apply to your scenarios involving human intentionality.
[Edit: This is overly harsh. See continued discussion.]
A straw-man argument is performed with the wrong intentions. I don’t think your argument is straw-man. Your intentions are fine. Your words are clear, coherent and falsifiable. Your argument is merely wrong.
[Edit: This is overly harsh. See continued discussion.]
Cool—thanks for your feedback! I agree that I could be more rigorous with my terminology. Nonetheless, I do think I have a rigorous argument underneath all this—even if it didn’t come across. Let me try to clarify: I did not mean to refer to human intentionality anywhere here. I was specifically trying to argue that the "chaos-theory definition of causality" you give, while great in idealized deterministic systems, is inadequate in the complex, messy "real world." Instead, the rigorous definition I prefer is the counterfactual, information-theoretic one developed by Judea Pearl, which I tried to outline here in layman’s terms. This definition is entirely ill-posed in a deterministic chaotic system, but will work as soon as we have any stochasticity (from whatever source). Does this address your point at all, or am I off-base?
You do address my point. This comment helped too. I think I understand better what you’re getting at now. I think you are trying to explain how attempting to trace causation back with the precision of chaos theory is impossible in complex real-world situations of limited information, and that an alternative definition of causation is necessary to handle such contexts. Such contexts constitute the majority of practical experience.
I no longer believe your argument would self-destruct if you included a rigorous definition of causality. I understand now that your argument does not depend on human intentionality. Neither is it wrong.
wow, some Bayesian updating there—impressive! :)
I commented about this on a related post.
ah yes, great minds think alike! =)

What I really like about J. Pearl’s counterfactual causality framework is that it gives us a way to make these arguments rigorous, and even to precisely quantify "how much did the butterfly cause the tornado"—in bits!
Yes!! Very cool—going even one meta level up. I agree that the usefulness of a proposed model is certainly the ultimate judge of whether it’s "good" or not. To make this even more concrete, we could try to construct a game and compare the mean performance of two agents holding the two models we want to compare… I wonder if anyone’s tried that… As far as I know, the counterfactual approach is "state of the art" for understanding causality these days—and it is a bit lacking for the reason you say. This could be a cool paper to write!
The counterfactual approach is indeed very popular, despite its obvious limitations. You can see a number of posts from Chris Leung here on the topic, for example. As for comparing performance of different agents, I wrote a post about it some years ago, not sure if that is what you meant, or if it even makes sense to you.
hmm, so what I was thinking is whether we could give an improved definition of causality based on something like "A causes B iff the model [A causes B] performs better than other models in some (all?) games / environments"—which may have a funny dependence on the game or environment we choose. Though as hard as the counterfactual definition is to work with in practice, this may be even harder… Your post may be related to this, though not the same, I think. I guess what I’m suggesting isn’t directly about decision theory.
I’m strikethrough-ing this comment for being less kind than the author/post deserves. But it does make true and useful points, and I don’t have the energy to rewrite it to be kinder right now, so I’m not deleting it outright.

The supercooled water example isn’t actually an example of chaos. It’s an example where the system is in a metastable state, and any perturbation causes it to switch to a more-stable state. Stable states are exactly what chaos isn’t.

A better intuition for something chaos-like: imagine that we add together a whole bunch of numbers, then check whether the result is odd or even. Changing any single number from odd to even, or vice versa, causes the end result to flip. Chaos is like that: one small perturbation can cause a large-scale change (like changing the path of a hurricane), and there is a wide variety of possible small perturbations, any one of which could cause the large-scale outcome to change back and forth between possible outcomes.
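As a tiny sketch of that parity analogy (the specific numbers are arbitrary, my own choice):

```python
nums = [17, 4, 96, 23, 8, 51, 12]   # arbitrary addends
print(sum(nums) % 2)                # parity of the sum: 1 (odd)

nums[3] += 1                        # one small perturbation: 23 -> 24 flips that addend's parity
print(sum(nums) % 2)                # the parity of the whole sum flips: 0 (even)
```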
I’m not sure why this was crossed out—seems quite civil to me… And I appreciate your thoughts on this! I do think we agree at the big-picture level, but have some mismatch in details and language. In particular, as I understand J. Pearl’s counter-factual analysis, you’re supposed to compare this one perturbation against the average over the ensemble of all possible other interventions. So in this sense, it’s not about "holding everything else fixed," but rather about "what are all the possible other things that could have happened."
I believe that would be an interventional analysis, in Pearl’s terms, not a counterfactual analysis.
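For reference, here is the distinction in Pearl's standard notation (my gloss, not part of the original exchange): an interventional query asks about outcomes after forcing a variable, while a counterfactual query additionally conditions on what was actually observed.

$$
P\big(Y \mid \mathrm{do}(X = x)\big)
\qquad \text{vs.} \qquad
P\big(Y_{X = x'} \mid X = x,\ Y = y\big)
$$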
cool—and I appreciate that you think my posts are promising! I’m never sure if my posts have any meaningful ‘delta’ - seems like everything’s been said before. But this community is really fun to post for, with meaningful engagement and discussion =)
Thank you for this interesting post. Could you clarify your assertion that the real world is not an idealized deterministic system? Of course we cannot model it as such, but ignoring quantum effects, the world is deterministic. In that sense it seems to me that we might never be able to confidently conclude that the butterfly caused the hurricane, but it could still be true. (And yes, in that Buddhist fable, my position has always been that trees do fall down, even if nobody sees it.)
If "cause" means that given A, B must necessarily happen , irrespective if other factors , then the butterfly wing and the assassination aren’t causes. But there are other was of defining "cause" where they are!
If "cause" means that given A, B must necessarily happen , and nothing but A could bring about B , then the impurity wasn’t the cause. But there are other was of defining "cause" where it was!
Sure, the butterfly is really minor compared to everything else going on, and so only "causes" the hurricane if you unnaturally consider the butterfly as a variable while many more important factors are held fixed. But I don’t believe the assassination of Franz Ferdinand is in the same category. While there’s certainly a danger that hindsight could make certain events look more pivotal than they really were, the very fact that we have a natural-seeming chain of apparent causation from the assassination to the war is evidence against it being a "butterfly effect".
Yeah, I’m quite curious to understand this point too—certainly not sure how far this reasoning can be applied (and whether Ferdinand is too much of a stretch). I was thinking of the assassination as the "perturbation in a supercooled liquid"—where it’s really the overall geopolitical tension that was the dominant cause, and anything could have set off the global phase transition. Though this gets back to the limitations of counterfactual causality in the real world...
It’s not determined by reality; it’s determined by your interests. The geopolitical tension and the assassination are both reasonable answers, depending on the exact question.