How to Understand and Mitigate Risk

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk

Contents

Transparent Risks

Transparent risks are those risks that can be easily quantified and known in advance. They’re equivalent to the picture above: a transparent bag where I can count the exact number of marbles of each type. If I’m also certain about how much each marble is worth, then I have a simple strategy for dealing with risks in this situation.

How to Mitigate Transparent Risks: Do the Math

The simple strategy for transparent risks like the one above is to do the math.
Expected Value

Expected value is a simple bit of probability theory: multiply the likelihood of an event happening by its payoff to get your long-run value over time. It’s a simple way to figure out whether the risk is worth the reward in any given situation. The best introduction I know to expected value is here.

Kelly Criterion

The Kelly criterion is helpful when losing your entire bankroll is worse than other outcomes. I don’t fully understand it, but you should, and Zvi wrote a post on it here. (If someone would be willing to walk me through a few examples and show me where all the numbers in the equation come from, I’d be very grateful.)
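
As a toy sketch of both ideas (all numbers invented for illustration), the arithmetic looks like this. The Kelly closed form used here is the standard one for a bet paying b-to-1 with win probability p:

```python
def expected_value(outcomes):
    """outcomes: a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

def kelly_fraction(p, b):
    """Fraction of bankroll to stake on a bet paying b-to-1 with win probability p."""
    return (b * p - (1 - p)) / b

# A 70% chance of winning $2 vs. a 30% chance of losing $3 is worth taking:
ev = expected_value([(0.7, 2.0), (0.3, -3.0)])  # 0.7*2 - 0.3*3 = +$0.50 per play

# An even-money bet you win 60% of the time: Kelly says stake 20% of bankroll.
f = kelly_fraction(0.6, 1.0)
```

Note that expected value alone says nothing about sizing; Kelly is what keeps a positive-EV bet from bankrupting you through overbetting.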

Transparent Risks in Real Life

Drunk Driving

Driving drunk is a simple, well-studied risk for which you can quickly find probabilities of crash, injury, and death to yourself and others. By comparing these costs to the costs of cab fare (and the time needed to get your car in the morning if you left it), you can make a relatively transparent and easy estimate of whether it’s worth driving at your Blood Alcohol Content level (spoiler alert: no, if your BAC is anywhere near .08 on either side). The same method can be used for any well-studied risk that exists within tight, slow-changing bounds.

Commodity and Utility Markets

While most business opportunities are not transparent risks, an exception exists for commodities and utilities (in the sense meant by Wardley Mapping). It’s quite easy to research the cost of creating a rice farm, or a power plant, as well as get a tightly bounded probability distribution for the expected price you can sell your rice or electricity at after making the initial investment. These markets are very mature, and there are unlikely to be wild swings or unexpected innovations that significantly change the market. However, because these risks are transparent, competition also drives margins down. The winners are those who can squeeze a little extra margin through economies of scale or other monopoly effects like regulatory capture.

Edit: After being pointed to the data on commodities, I no longer lump them in with utilities as transparent risks and would call them more Knightian.
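
A minimal sketch of the drunk-driving comparison. Every number below is a made-up placeholder, not a real crash statistic; the point is only the shape of the calculation:

```python
# All numbers are illustrative placeholders, not real statistics.
p_crash = 0.0005          # assumed per-trip crash probability at an elevated BAC
cost_crash = 500_000      # assumed average cost of a crash (damage, injury, legal)
cost_cab = 40             # cab fare both ways
cost_retrieval = 20       # value of the time to retrieve the car tomorrow

ev_drive = p_crash * cost_crash     # expected cost of driving: $250
ev_cab = cost_cab + cost_retrieval  # certain cost of the cab: $60

# ev_drive > ev_cab, so take the cab even under these mild assumptions.
```

With real statistics plugged in, the gap is typically much larger, which is why the post can call this answer "transparent."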

Opaque Risks

Opaque risks are those risks that can be easily quantified and are unlikely to change, but which haven’t already been quantified and aren’t easy to quantify just by research. They’re equivalent to the picture above: an opaque bag that you know contains a static mix of certain types of marbles, but not the ratio of marbles to each other. As long as I’m sure that the bag contains only three types of marbles, and that the distribution is relatively static, a simple strategy for dealing with these risks emerges.

How to Mitigate Opaque Risks: Determine the Distribution

The simple strategy for opaque risks is to figure out the distribution. For instance, by pulling a few marbles at random out of the bag, you can over time become more and more sure about the distribution in the bag, at which point you’re now dealing with transparent risks. The best resource I know of for techniques to determine the distribution of opaque risks is How to Measure Anything by Douglas Hubbard.

Sampling

Sampling involves repeatedly drawing from the distribution in order to get an idea of what the distribution is. In the picture above, it would involve simply reaching your hand in and pulling a few marbles out. The bigger your sample, the more sure you can be about the underlying distribution.

Modelling

Modelling involves breaking down the factors that create the distribution into pieces that are as transparent as possible. The classic example from Fermi estimation is how many piano tuners there are in Chicago—that number may be opaque to you, but the number of people in Chicago is relatively transparent, as is the percentage of people who own pianos, the likelihood that someone will want their piano tuned, and the amount of money someone needs to make a business worthwhile. These more transparent factors can be used to estimate the opaque factor of piano tuners.
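
Both techniques can be sketched in a few lines. The hidden bag and all the Fermi inputs below are invented guesses, but they show how sampling converges and how a model decomposes an opaque number:

```python
import random

# --- Sampling: estimate a hidden marble distribution by drawing repeatedly ---
hidden_bag = ["red"] * 50 + ["blue"] * 30 + ["green"] * 20  # unknown to the sampler
random.seed(0)
draws = [random.choice(hidden_bag) for _ in range(1000)]
est_red = draws.count("red") / len(draws)  # converges toward the true 0.5

# --- Modelling: Fermi estimate of piano tuners in Chicago (all inputs guesses) ---
population = 2_700_000
pianos = population / 100            # guess: one piano per 100 people
tunings_per_year = pianos * 1        # guess: each piano tuned once a year
tunings_per_tuner = 2 * 5 * 50       # 2 per day, 5 days a week, 50 weeks
tuners = tunings_per_year / tunings_per_tuner  # roughly 50 tuners
```

The sampling estimate is only as good as the sample size; the Fermi estimate is only as good as its weakest input, but each input is far more transparent than the original question.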

Opaque Risks in Real Life

Choosing a Career You Don’t Like

In the personal domain, opaque risks often take the form of very personal things that have never been measured because they’re unique to you. As a career coach, I often saw people leaping into careers that were smart from a global perspective (likely to grow, good pay, etc.) but ignoring the more personal factors. The solution was a two-tier sampling approach: do a series of informational interviews for the top potential job titles and industries, and then, for the top 1–3 careers/industries, see if you can do a form of job shadowing. This significantly cut down the risk by making an opaque choice much more transparent.

Building a Product Nobody Wants

In the business domain, solutions that are products (in Wardley Mapping terms) but are not yet commoditized often qualify as opaque risks. In this case, simply talking to customers, showing them a solution, and asking if they’ll pay can save a significant amount of time and expense before actually building the product. Material on "lean startup" is all about how to do efficient sampling in these situations.

Knightian Risks

Knightian risks are those risks that exist in environments with distributions that are actively resistant to the methods used with opaque risks. There are three types of Knightian risks: Black Swans, Dynamic Environments, and Adversarial Environments. A good portion of "actually trying to get things done in the real world" involves working with Knightian risks, and so most of the rest of this essay will focus on breaking them down into their various types and talking about the various solutions to them. Milan Griffes has written about Knightian risks in an EA context on the EA forum, calling them "cluelessness".

Types of Knightian Risks

Black Swans

A black swan risk is an unlikely but very negative event that can occur in the game you choose to play. In the example above, you could do a significant amount of sampling without ever pulling the dynamite. However, this is quite likely a game you would want to avoid, given the presence of the dynamite in the bag. You’re likely to severely overestimate the expected value of any given opportunity, and then be wiped out by a single black swan. Modelling isn’t useful because very unlikely events probably have causes that don’t enter into your model, and it’s impossible to know you’re missing them because your model will appear to be working accurately (until the black swan hits). A great resource for learning about black swans is the eponymous The Black Swan, by Nassim Taleb.

Dynamic Environments

When your risks are changing faster than you can sample or model them, you’re in a dynamic environment. This is a function of how big the underlying population size is, how good you are at sampling/modelling, and how quickly the distribution is changing. A traditional sampling strategy as described above involves first sampling, finding out your risks in different situations, then finally "choosing your game" by making a decision based on your sample. However, when the underlying distribution is changing rapidly, this strategy is rendered moot, as the information your decision was based on quickly becomes outdated. The same argument applies to a modelling strategy as well. There’s not a great resource I know of to really grok dynamic environments, but an OK resource is Thinking in Systems by Donella Meadows (great book, but only OK for grokking the inability to model dynamic environments).

Adversarial Environments

When your environment is actively (or passively) working to block your attempts to understand it and mitigate risks, you’re in an adversarial environment. Markets are a typical example, as are most other zero-sum games with intelligent opponents. They’ll be actively working to change the game so that you lose, and any change in your strategy will change their strategy as well.
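
A minimal simulation of why "sample, then decide" fails in a dynamic environment. The drift rate and starting probability are invented for illustration; the point is that a frozen estimate gets steadily worse as the distribution moves out from under it:

```python
import random

random.seed(1)
p_true = 0.5  # probability that a drawn marble pays off

# Sample-then-decide: estimate p from 100 draws, then commit to that estimate.
sample = [random.random() < p_true for _ in range(100)]
p_estimate = sum(sample) / len(sample)

# Meanwhile the environment drifts: p_true falls by 0.01 per round after sampling.
drift_error = []
for _ in range(50):
    p_true = max(0.0, p_true - 0.01)
    drift_error.append(abs(p_estimate - p_true))

# drift_error grows every round: the estimate was accurate for a world
# that no longer exists by the time the decision pays off.
```

This is the quantitative version of the post's claim: the useful comparison is your sampling rate versus the distribution's rate of change, not your sample size alone.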

Ways to Mitigate Knightian Risks

Antifragility

Antifragility is a term coined by Nassim Taleb to describe systems that gain from disorder. If you think of the games described above as being composed of distributions, plus payoff rules that describe how you react to those distributions, antifragility is about creating flexible payoff rules that can handle Knightian risks. Taleb has an excellent book on antifragility that I recommend if you’d like to learn more. In terms of the "marbles in a bag" metaphor, antifragility is a strategy where pulling out marbles that hurt you makes sure you get less and less hurt over time.
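
One way to sketch that marbles-in-a-bag version of antifragility. The bag contents and the damage/resistance rule are made up; the structure to notice is that the antifragile payoff rule converts each harmful draw into reduced future harm:

```python
import random

random.seed(2)
bag = ["safe"] * 8 + ["harm"] * 2  # 20% of draws are harmful

def play(antifragile, rounds=1000):
    """Total damage taken over many draws from the bag."""
    damage, resistance = 0.0, 0.0
    for _ in range(rounds):
        if random.choice(bag) == "harm":
            damage += max(0.0, 10.0 - resistance)
            if antifragile:
                resistance += 1.0  # each stressor makes the next one hurt less
    return damage

fragile_total = play(antifragile=False)
antifragile_total = play(antifragile=True)
# The antifragile player absorbs at most the first ten hits;
# after that, further bad draws do nothing.
```

Notice that the distribution in the bag never changes; only the payoff rule does, which is exactly why this works even when the distribution is unknowable.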

Knightian Risks in Real Life

0 to 1 Companies When a company is creating something entirely new (in the Wardley Mapping sense), it’s taking a Knightian risk. Sampling is fairly useless here because people don’t know they want what doesn’t exist, and naive approaches to modelling won’t work because your inputs are all junk data that exists without your product in the market. How would each of these strategies handle this situation?

Comment

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=Hvybkwb7ZKKgTA4cx

How would you classify existential risks within this framework? (Or would you?) Here’s my attempt. Any corrections or additions would be appreciated.

Transparent risks: asteroids (we roughly know the frequency?)

Opaque risks: geomagnetic storms (we don’t know how resistant the electric grid is, although we have an idea of their frequency), natural physics disasters (such as vacuum decay), being killed by an extraterrestrial civilization (could also fit black swans and adversarial environments depending on its nature)

Knightian risks:
- Black swans: ASI, nanotech, bioengineered pandemics, simulation shutdown (assuming it’s because of something we did)
- Dynamic environments: "dysgenic" pressures (maybe also adversarial), natural pandemics (the world is getting more connected, medicine more robust, etc., which makes it difficult to know how the risks of natural pandemics are changing), nuclear holocaust (the game-theoretic equilibrium changes as we get nuclear weapons that are faster and more precise, better detectors, etc.)
- Adversarial environments: resource depletion or ecological destruction, misguided world government or another static social equilibrium that stops technological progress, repressive totalitarian global regime, take-over by a transcending upload (?), our potential or even our core values being eroded by evolutionary development (e.g. Hansonian em world)

Other (?): technological arrest ("The sheer technological difficulties in making the transition to the posthuman world might turn out to be so great that we never get there." from https://nickbostrom.com/existential/risks.html)

Comment

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=XWQQx4W2n35n5b6kJ

This is great! I agree with most of these, and think it’s a useful exercise to do this classification.

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=ECkRui5KZnS5ErSoM

Strong upvoted. This is a great overview, thanks for putting it together! I’m going to be coming back to this again for sure.

> Note that Effectuation and Antifragility explicitly trade off against each other. Antifragility trades away certainty for flexibility while Effectuation does the opposite.

Can you say more about this? You mention that effectuation involves "shift[ing] the rules such that the risks were no longer downsides", but that looks a lot like hormesis/antifragility to me. The lemonade principle in particular feels like straight-up antifragility (unexpected events/stressors are actually opportunities for growth).

Comment

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=j3yRpPANvse9yevPf

That claim is something that often seems to be true, but it’s one of the things I’m unsure of as a general rule. I do know that in practice, when I try to mitigate risk in my own projects and think of antifragile and effectuative strategies, they tend to be at odds with each other (this is true of both the "0 to 1 Companies" and "AGI Risk" examples below). The difference between hormesis and the lemonade principle is one of mindset.

In general, the antifragile mindset is "you don’t get to choose the game, but you can make yourself stronger according to the rules." Hormesis from that mindset is "Given the rules of this game, how can I create a policy that tends to make me stronger against the different types of risks?" The effectuative mindset is "rig the game, then play it." From that perspective, the lemonade principle looks more like "Given that I failed to rig this game, how can I use the information I just acquired to rig a new game?" Say you’re a farmer of a commodity and there’s an unexpected drought. The hormetic mindset is "store a bit more water in the future" (and do this every time there’s a drought). The lemonade mindset is "start a drought insurance company that pays out in water."

Comment

I think I get you now, thanks. Not sure if this is exactly right, but one is proactive (preparing for known stressors) and one is reactive (response to unexpected stressors).

Comment

I’m not sure if this is the way I would think of it but I can kind of see it. I more think of them as different responses to the same sorts of stressors.

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=Y7JMoutuqQ7sevFeB

I really enjoyed reading this. Quite concise, well organised, and I thought quite comprehensive (nothing is ever exhaustive, so no need to apologise on that front). I will find this a very useful resource, and while nothing in it was completely "new" to me, I found the structure really helped me to think more clearly about this. So thanks.

A suggestion—it might be useful to turn your attention to more specific process steps using the attention-directing classification tools outlined here. For example: Step 1: Identify the type of risk (Transparent, Opaque, Knightian). Step 2: List mitigation strategies for the risk type—consider pros/cons for each strategy. Step 3: Weight strategy effectiveness according to pros/cons and your ability to undertake it, etc. That’s just off the cuff—I’m sure you can do better :)

One minor point on AGI—how can you "get a bunch of forecasting experts together" on something that doesn’t exist and on which there is not even clear agreement about what it actually is? I’m sure you are familiar with the astonishingly poor record of forecasts about AGI arrival (a bit like nuclear fusion, and at least that’s reasonably well defined). For someone to be a "forecasting expert" on anything, they have to have a track record of reliably forecasting something—WITH FEEDBACK about their accuracy (which they use to improve). By definition, such experts do not exist for something that has not yet come into being and around which there isn’t a specific and clear definition/description. You might start by first gaining a real consensus on a very specific description of what it is you’re forecasting, and then maybe search for forecasting expertise in a similar area that already exists. But I think that would be difficult. AGI "forecasting" is replete with confirmation bias and wishful thinking (and if you challenge that, you get the same sort of response you get from challenging religious people over the existence of their deity ;->)

Thanks again—loved it.

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=SqCkXbSag8ZGYkGse

Thanks for writing this post! I always felt that many of Taleb’s concepts/ideas were missing from LessWrong (which is not the same as saying that they’re missing from the community; it’s hard to know that, and I assume many are familiar). I thought about writing a few canonical posts myself about some of his ideas, but wasn’t sure how to go about it. More specifically, I thought about making a Risk tag and noticed there were very few posts talking about risk (on the meta level) instead of a specific risk (like AI), and that was surprising to me.

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=HYuomcXfXyetmDeEg

The images are all gone :(

Comment

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=qwtmWPKtYTCXsefey

This is quite hard to debug because the images are showing up on my machine, even in incognito. Are they showing up now?

Comment

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=pjpTmfGCYMMnDaAFA

Yeah, they were hosted on Matt’s website, and are now down. Though it probably also means they can still be restored.

Comment

Nope, still broken. When I try to access them the site asks me for a password (i.e. if I go directly to the link where they are hosted) so that’s probably related. I expect turning off that password protection will probably make them visible again.

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=nGcdnNa9S6mJeSko4

Nice post! Have you seen the Cynefin framework? It covers some similar ideas: https://en.wikipedia.org/wiki/Cynefin_framework

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=sPF35HeisTSNiYAv3

> The Kelly criterion is...

> (If someone would be willing to walk me through a few examples and show me where all the numbers in the equation come from, I’d be very grateful.)

This would make for a really long comment—about a thousand words (explaining how to derive it). It should probably be a post instead, and to be readable, the writer would have to know how to make math formulas render properly instead of just being text. I do not know how to do that last thing, so the short version is:

The Kelly criterion is supposed to be a guide to "optimal betting" over an infinite number of bets, if you have the utility function U = ln(M), where M is how much money you have. The Wikipedia page isn’t very helpful about the derivation anymore, but it has a link to what it says is the original paper: http://www.herrold.com/brokerage/kelly.pdf.

> The Kelly criterion is helpful when losing your entire bankroll is worse than other outcomes.

This is because log($0) = −infinity utilons. If you don’t think being broke is the worst thing that could happen to you, this might not be your exact utility function.
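
A numerical sketch of where the numbers come from, in lieu of rendered math: maximizing the expected log-growth of the bankroll over all bet fractions recovers the standard closed-form Kelly fraction f* = (bp − q)/b for a bet paying b-to-1 with win probability p (and q = 1 − p):

```python
import math

p, b = 0.6, 1.0  # win probability, net odds: the bet pays b-to-1

def growth_rate(f):
    """Expected log-growth of the bankroll per bet when staking fraction f."""
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

# Brute-force the maximizer over a fine grid of bet fractions...
best_f = max((i / 1000 for i in range(990)), key=growth_rate)

# ...and compare with the closed form: (0.6 - 0.4) / 1 = 0.2 here.
kelly = (b * p - (1 - p)) / b
```

Plugging in other values of p and b and re-running the scan is a quick way to build the intuition the parent comment asks for: the optimum always lands on (bp − q)/b, and staking more than that lowers long-run growth even though each individual bet has positive expected value.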

Comment

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=Qq6J6biwhNrJLeqRL

Thanks! I do get the purpose/idea behind the Kelly criterion, but I don’t get how to actually do the math, nor how to intuitively think about it when making decisions the way I intuitively think about expected value.

Comment

Are you familiar with derivatives, and the properties of logarithms?

Comment

I am familiar with derivatives. I don’t remember the properties of logarithms but I half remember the base change one :).

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=joPLaQKWyPWcnS8jH

Great post. Can you clarify for me: Are "Skin in the game", "Barbell", "Hormesis", "Evolution" and "Via Negativa" considered to be subsets of "Optionality" OR Are all 6 ("Skin in the game", "Barbell", "Hormesis", "Evolution", "Via Negativa" AND "Optionality") subsets of "Anti-fragility"? I understood the latter from the wording of the post but the former from the figure at the top. Same with "Effectuation" and "Pilot in plane" etc.

Comment

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=F9cWjkekfKRaZwRpt

Sort of both. Both optionality and the pilot-in-plane principle are like "guiding principles" of antifragility and effectuation, from which the subsequent principles fall out. However, they’re also good principles in their own right and subsets of the broader concept. It might be that I should change the picture to reflect the second thing instead of the first, to prevent confusions like this one. A good exercise to see if you grok antifragility or effectuation is to go through each principle and explain how it follows from either Optionality or the Pilot-in-Plane principle respectively.

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=aQGjEb7a2wMCsQpYu

> When your risks are changing faster than you can sample or model them, you’re in a dynamic environment.

For some reason it never really occurred to me before that a fast enough sampling rate effectively makes the environment quasi-static for analysis purposes. That’s interesting. I think it might be because what little work I have done in dynamics also entailed an action against which the environment needed to be modeled; so even an arbitrarily high sampling/modelling speed doesn’t affect how much the environment changes between when the action initiates and when it completes.

Quite separately, this post did a good job of incorporating everything I thought of that LessWrong has on risk, all in the same post, and it would totally have been worth it if it did not do anything else. Strong upvote.

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=aNxATb4XsGQ4RNr9h

I’m surprised about the examples you have for transparent risks. When it comes to drunk driving, I have no idea how my driving skills compare to the average person’s. Commodity markets do occasionally move in price as well: https://www.indexmundi.com/commodities/?commodity=rice&months=60 suggests that there were two months in the last 5 years where rice prices shifted by more than 10%. That’s very different from the risk of winning the lottery, where you can actually calculate the odds precisely. Taleb uses the term "ludic fallacy" for failing to distinguish those two types of risk. Given that you do quote Taleb later on, have you made a conscious decision to reject his notion of the "ludic fallacy"? If so, what’s your reasoning for doing so?

Comment

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=W55Jcf84qh8osXfhs

Yes, I think I have different intuitions than Taleb here. When you think about risk in terms of the strategies you use to deal with it, it doesn’t make sense to use, for instance, antifragility to deal with drunk driving on a personal level. It might make sense to use antifragility in general for risks of death, but the inputs for your antifragile decision should basically take the statistics for drunk driving at face value. I think it’s pretty similar to a lottery ticket in that 99% of the risk is transparent, and the remaining small amount is model uncertainty due to unknown unknowns (maybe someone will rig the lottery). The ludic fallacy in that sense applies to every risk, because there’s always some small amount of model uncertainty (maybe a malicious demon is confusing me). One way to think about this is that your base risk is transparent and your model uncertainty is Knightian—this is a sensible way to approach all transparent risks, and it’s part of the justification for the barbell strategy.

Comment

How my own driving skill differs from the average person’s feels to me like a straightforward known unknown. For rice prices, there’s the known unknown of weather and the resulting global crop yield. For a business that sells crops, it’s reasonable to buy options to protect against the risks that come from uncertainty about future prices.

Comment

> How my own driving skill differs from the average person feels to me a straightforward known unknown.

I didn’t think of a model where this mattered. I was thinking more of a model like "number of mistakes goes up linearly with alcohol consumption" than "number of mistakes gets multiplied by alcohol consumption". If it’s the latter, then this becomes an opaque risk (one that can be measured by tracking your number of mistakes in a given time period).

> For a business that sells crops it’s reasonable to buy options to protect against risk that come from the uncertainty about future prices.

Agreed. It also seems reasonable, when selecting which commodity to sell, to do a straight-up expected value calculation based on historical data and choose the one with the highest expected value. When thinking about it, perhaps there are "semi-transparent risks" that are not that dynamic or adversarial but do have black swans, and that should be its own category above transparent risks, under which commodities and utilities would go. However, I think the better way to handle this is to treat the chance of a black swan as model uncertainty that has Knightian risk, and otherwise treat the investment as transparent based on historical data.
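
A toy version of that straight-up expected value calculation, with the margin figures invented for illustration:

```python
# Illustrative only: made-up historical annual margins (per acre) for two crops.
rice_margins = [120, 135, 110, 125, 130]
wheat_margins = [100, 180, 60, 150, 90]

def mean(xs):
    return sum(xs) / len(xs)

rice_ev, wheat_ev = mean(rice_margins), mean(wheat_margins)
# Rice has the higher historical mean margin (124 vs 116) and less spread;
# the chance of a black swan is handled separately, as Knightian
# model uncertainty on top of this transparent base calculation.
```

This is exactly the split described above: the historical data supplies a transparent base estimate, and the black-swan exposure is a Knightian correction layered on top rather than something baked into the average.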

Comment

After having someone else on the EA forum also point me to the data on commodities, I’m now updating the post.

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=tiffswN9kBxjey8GT

How you would classify other global catastrophic risks according to these types?

Comment

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=jXASLF36omkoFJ6dy

I would say almost all global catastrophic risks would be classified as Knightian Risk. An exception might be something like an asteroid strike, which would be more opaque. Edit: changed meteor to asteroid.

https://www.lesswrong.com/posts/eA9a5fpi6vAmyyp74/how-to-understand-and-mitigate-risk?commentId=hpfKShkkFtsN5d3ZD

Regarding Transparent Risks and "Do the Math", I’m reminded of this tweet: https://twitter.com/paulg/status/1110672251102416896

> Something I wish existed: a mobile app that dynamically calculates the probability you’re about to crash your car, based on your speed, the history of the piece of road you’re on, the weather, the time of day, accelerometer data, etc.

The math isn’t that easy to do when you’re in the bar (and for the sort of person who, on the margin, might take the bet), which is exactly the sort of thing that should be automated.