1. Money wants to be linear, but wants even more to be logarithmic.
-
People sometimes talk as if risk-aversion (or risk-loving) is irrational in itself. It is true that VNM-rationality implies you just take expected values, and hence, don’t penalize variance or any such thing. However, *you are allowed to have a concave utility function*, such as utility which is logarithmic in money. This creates risk-averse behavior. (You could also have a convex utility function, creating risk-seeking behavior.)
-
Counterpoint: if you have risk-averse behavior, other agents can exploit you by selling you insurance. Hence, money flows from risk-averse agents to less risk-averse agents. Similarly, risk-seeking agents can be exploited by charging them for participating in gambles. From this, one might think a market will evolve away from risk aversion(/seeking), as risk-neutral agents accumulate money.
-
People clearly act more like money has diminishing utility, rather than linear utility. So revealed preferences would appear to favor risk-aversion. Furthermore, it’s clear that the amount of pleasure one person can get per dollar diminishes as we give that person more and more money.
-
On the other hand, that being the case, we can easily purchase a lot of pleasure by giving money to others with less. So from a more altruistic perspective, utility does not diminish nearly so rapidly.
-
Rationality arguments of the Dutch-book and money-pump variety require an assumption that "money" exists. This "money" acts very much like utility, suggesting that utility is supposed to be linear in money. Dutch-book arguments assume from the start that agents are willing to make bets if the expected value of those bets is nonnegative. Money-pump arguments, on the other hand, can establish this from other assumptions.
-
Stuart Armstrong summarizes the money-pump arguments in favor of applying the VNM axioms directly to real money. This would imply risk-neutrality and utility linear in money.
-
On the other hand, the Kelly criterion implies betting as if utility were logarithmic in money.
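For a simple binary bet, the Kelly fraction has a closed form, f* = p − (1−p)/b for win probability p and net odds b-to-1. A minimal sketch (the function name and example odds are mine, not from the post):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly fraction for a bet won with probability p at net odds b-to-1.

    Derived by maximizing E[log(wealth)]: f* = p - (1 - p) / b.
    Never bet on a non-positive edge, so clamp at zero.
    """
    return max(p - (1 - p) / b, 0.0)

print(kelly_fraction(0.5, 2.0))   # 0.25: a 50% chance to triple your stake
print(kelly_fraction(0.75, 1.0))  # 0.5: a 75% chance to double your stake
print(kelly_fraction(0.5, 1.0))   # 0.0: a fair coin flip has no edge
```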
-
The Kelly criterion is not derived via Bayesian rationality, but rather, an asymptotic argument about average-case performance (which is kinda frequentist). So initially it seems this is no contradiction.
-
However, it is a theorem that a diverse market would come to be dominated by Kelly bettors, as Kelly betting maximizes long-term growth rate. This means the previous counterpoint was wrong: expected-money bettors profit in expectation from selling insurance to Kelly bettors, but the Kelly bettors eventually dominate the market.
-
Expected-money bettors continue to have the most money in expectation, but this high expectation comes from increasingly improbable strings of wins. So you might see an expected-money bettor initially get a lot of money from a string of luck, but eventually burn out.
-
(For example, suppose an investment opportunity triples money 50% of the time, and loses it all the other 50% of the time. An expected money bettor will go all-in, while a Kelly bettor will invest some money but hold some aside. The expected-money betting strategy has the highest expected value, but will almost surely be out in a few rounds.)
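This example is easy to simulate. In the sketch below (20 rounds and 1000 bettors are illustrative choices of mine), the all-in strategy almost always busts within a few rounds, while staking the Kelly fraction—a quarter of the bankroll, for this gamble—usually grows:

```python
import random

def simulate(fraction: float, rounds: int, seed: int) -> float:
    """One bettor repeatedly staking `fraction` of wealth on a gamble
    that triples the stake 50% of the time and loses it otherwise."""
    rng = random.Random(seed)
    wealth = 1.0
    for _ in range(rounds):
        stake = fraction * wealth
        wealth -= stake
        if rng.random() < 0.5:
            wealth += 3 * stake  # stake triples on a win
        if wealth == 0:
            break  # busted (only possible when betting everything)
    return wealth

all_in = [simulate(1.0, 20, s) for s in range(1000)]   # expected-money bettors
kelly = [simulate(0.25, 20, s) for s in range(1000)]   # Kelly bettors
print(sum(w == 0 for w in all_in) / 1000)  # fraction busted: nearly all
print(sum(w > 1 for w in kelly) / 1000)    # fraction that grew: most
```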
-
The Kelly criterion still implies near-linearity for small quantities of money.
-
Moreover, the more money you have, the closer to linearity—so the larger the quantities of money you’ll treat the way an expected-money maximizer would.
-
This vindicates, to a limited extent, the idea that a market will approach linearity—Kelly bettors will act more and more like expected-money maximizers as they accumulate money.
-
As argued before, we get agents with a large bankroll (and so, with behavior closer to linear) selling insurance to Kelly agents with smaller bankroll (and hence more risk-averse), and profiting from doing so.
-
But everyone is still Kelly in this picture, making logarithmic utility the correct view.
-
So the money-pump arguments seem to almost pin us down to maximum-expectation reasoning about money, but actually leave enough wiggle room for logarithmic value.
-
If money-pump arguments for expectation-maximization don’t apply in practice *to money*, why should we expect them to apply elsewhere?
-
Kelly betting is fully compatible with expected utility maximization, since we can maximize the expectation of the logarithm of money. But if the money-pump arguments are our reason for buying into the expectation-maximization picture in the first place, then their failure to apply to money should make us ask: why would they apply to utility any better?
-
Candidate answer: utility is defined as the quantity those arguments work for. Kelly-betting preferences on money don’t actually violate any of the VNM axioms. Because the VNM axioms hold, we can re-scale money to get utility. That’s what the VNM axioms give us.
-
The VNM axioms only rule out extreme risk-aversion or risk-seeking, where a gamble between A and B is valued outside the range of values from A to B. Risk aversion is just fine if we can understand it as a re-scaling.
-
So any kind of re-scaled expectation maximization, such as maximization of the log, should be seen as a *success* of VNM-like reasoning, not a failure.
-
Furthermore, thanks to continuity, any such re-scaling will closely resemble linear expectation maximization when small quantities are involved. Any concave (risk-averse) re-scaling will resemble linear expectation more as the background numbers (to which we compare gains and losses) become larger.
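For the logarithmic re-scaling specifically, this is easy to check numerically: the risk premium a log-utility agent demands for a fixed-size gamble shrinks roughly in proportion to 1/wealth. A sketch (the ±$10 gamble and wealth levels are illustrative):

```python
import math

def certainty_equivalent(wealth: float, outcomes) -> float:
    """Certainty equivalent of a gamble under log utility at a given wealth.

    `outcomes` is a list of (probability, dollar change) pairs.
    """
    expected_log = sum(p * math.log(wealth + dx) for p, dx in outcomes)
    return math.exp(expected_log) - wealth

gamble = [(0.5, +10.0), (0.5, -10.0)]  # fair coin flip for $10, EV = 0
for wealth in (100.0, 1_000.0, 100_000.0):
    print(wealth, certainty_equivalent(wealth, gamble))
```

The certainty equivalent is slightly negative at every wealth level (risk aversion), but the penalty falls off roughly as $50/wealth—about −$0.50 at $100 and −$0.0005 at $100,000—so a large bankroll treats the gamble almost linearly.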
-
It still seems important to note again, however, that the usual justification for Kelly betting is "not very Bayesian" (very different from subjective preference theories such as VNM, and heavily reliant on long-run frequency arguments).
2. Money wants to go negative, but can’t.
-
Money can’t go negative. Well, it can, just a little: we do have a concept of debt. But if the economy were a computer program, debt would seem like a big hack. There’s no absolute guarantee that debt can be collected. There are a lot of incentives in place to help ensure debt can be collected, but ultimately, bankruptcy or death or disappearance can make a debt uncollectible. This means money is in this weird place where we sort of act like it can go negative for a lot of purposes, but it also sort of can’t.
-
This is especially weird if we think of money as debt, as is the case for gold-standard currencies and similar: money is an IOU issued by the government, which can be repaid upon request.
-
Any kind of money is ultimately based on some kind of *trust*. This can include trust in financial institutions, trust that gold will still be desirable later, trust in cryptographic algorithms, and so on. But thinking about debt emphasizes that a lot of this trust is trust in people.
-
Money can have a scarcity problem.
-
This is one of the weirdest things about money. You might expect that if there were "too little money" the value of money would simply re-adjust, so long as you can subdivide further and the vast majority of people have a nonzero amount. But this is not the case. We can be in a situation where "no one has enough money"—the Great Depression was a time when there were too few jobs and too much work left undone. Not enough money to buy the essentials. Too many essentials left unsold. No savings to turn into loans. No loans to create new businesses. And all this, not because of any change in the underlying physical resources. Seemingly, economics itself broke down: the supply was there, the demand was there, but the supply and demand curves could not meet.
-
(I am not really trained in economics, nor a historian, so my summary of the Great Depression could be mistaken or misleading.)
-
My loose understanding of monetary policy suggests that scarcity is a concern even in normal times.
-
The scarcity problem would not exist if money could be reliably manufactured through debt.
-
I’m not really sure of this statement.
-
When I visualize a scarcity of money, it’s like there’s both work needing done and people needing work, but there’s not enough money to pay them. Easy manufacturing of money through debt should allow people to pay other people to do work.
-
OTOH, if it’s too easy to go negative, then the concept of money doesn’t make sense any more: spending money doesn’t decrease your buying power any more if you can just keep going into debt. So everyone should just spend like crazy.
-
Note that this isn’t a problem in theoretical settings where money is equated with utility (IE, when we assume utility is linear in money), because money is being inherently valued in those settings, rather than valued instrumentally for what it can get. This assumption is a convenient approximation, but we can see here that it radically falls apart for questions of negative bankroll—it seems easy to handle (infinitely) negative money if we act like it has intrinsic (terminal) value, but it all falls apart if we see its value as extrinsic (instrumental).
-
So it seems like we want to facilitate negative bank accounts "as much as possible, but not too much"?
-
Note that Dutch-book and money-pump arguments tend to implicitly assume an infinite bankroll, ie, money which can go negative as much as it wants. Otherwise you don’t know whether the agent has enough to participate in the proposed transaction.
-
Kelly betting, on the other hand, assumes a finite bankroll—and indeed, might have to be abandoned or adjusted to handle negative money.
-
I believe many mechanism-design ideas also rely on an infinite bankroll.
It’s true that diminishing marginal utility can produce some degree of risk-aversion. But there’s good reason to think that no plausible utility function can produce the risk-aversion we actually see—there are theorems along the lines of "if your utility function makes you prefer X to Y then you must also prefer A to B" where pretty much everyone prefers X to Y and pretty much no one prefers A to B. [EDITED to add:] Ah, found the specific paper I had in mind: "Diminishing Marginal Utility of Wealth Cannot Explain Risk Aversion" by Matthew Rabin. An example from the paper: if you always turn down a 50⁄50 bet where you could either lose $10 or gain $10.10, and if the only reason is the shape of your utility function, then you should also always turn down a 50⁄50 bet where you could either lose $1000 or gain all the money in the world. (However much money there is in the world.)
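As a numerical illustration of Rabin's point (not his proof): a stock concave utility like log of wealth cannot reject the small bet at *every* wealth level—it rejects the lose $10/gain $10.10 flip when poor but accepts it once wealthy, so wealth-independent rejection already rules out log utility, and Rabin's theorem shows that any concave utility consistent with such rejection must also reject absurdly favorable large bets. A sketch (the wealth levels are illustrative):

```python
import math

def accepts(wealth: float, gain: float, loss: float) -> bool:
    """Does a log-utility agent take a 50/50 bet to gain `gain` or lose `loss`?"""
    eu_bet = 0.5 * math.log(wealth + gain) + 0.5 * math.log(wealth - loss)
    return eu_bet > math.log(wealth)

print(accepts(500.0, 10.10, 10.0))      # False: rejects the bet when poor
print(accepts(100_000.0, 10.10, 10.0))  # True: accepts the bet when rich
```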
Comment
I didn’t believe that claim, so I looked at the paper. The key piece is that you must always turn down the 50⁄50 lose 10/gain 10.10 bet, no matter how much wealth you have—i.e. even if you had millions or billions of dollars, you’d still turn down the small bet. Considering that assumption, I think the real-world applicability is somewhat more limited than the paper’s abstract seems to indicate. That said, there are multiple independent lines of evidence in various contexts suggesting that humans’ degree of risk-aversion is too strong to be accounted for by diminishing marginals alone, so I do still think that’s true.
Comment
The paper has some more sophisticated examples that make less stringent assumptions. Here are a couple. "Suppose, for instance, we know a risk-averse person turns down 50-50 lose $100/gain $105 bets for any lifetime wealth level less than (say) $350,000, but know nothing about her utility function for wealth levels above $350,000, except that it is not convex. Then we know that from an initial wealth level of $340,000 the person will turn down a 50-50 bet of losing $4,000 and gaining $635,670. If we only know that a person turns down lose $100/gain $125 bets when her lifetime wealth is below $100,000, we also know she will turn down a 50-50 lose $600/gain $36 billion bet beginning from a lifetime wealth of $90,000."
Comment
Bets have fixed costs to them in addition to the change in utility from the money gained or lost. The smaller the bet, the more those fixed costs dominate. And at some point, even the hassle from just trying to figure out that the bet is a good deal dwarfs the gain in utility from the bet. You may be better off arbitrarily refusing to take all bets below a certain threshold because you gain from not having overhead. Even if you lose out on some good bets by having such a policy, you also spend less overhead on bad bets, which makes up for that loss.
The fixed costs also change arbitrarily; if I have to go to the ATM to get more money because I lost a $10.00 bet, the disutility from that is probably going to dwarf any utility I get from a $0.10 profit, but whether the ATM trip is necessary is essentially random.
Of course you could model those fixed costs as a reduction in utility, in which case the utility function is indeed no longer logarithmic, but you need to be very careful about what conclusions you draw from that. For instance, you can’t exploit such fixed costs to money pump someone.
Comment
Yup, I agree with all that, and I think it is one of the reasons for (at least some instances of) loss aversion. I wonder whether there have been attempts to probe loss aversion in ways that get around this issue, maybe by asking subjects to compare scenarios that somehow both have the same overheads
Possibly relevant in the context of Kelly betting / maximizing log wealth.
Comment
The idea is supposed to be that turning down the first sort of bet looks like ordinary risk aversion, the phenomenon that some people think concave utility functions explain; but that if the explanation is the shape of the utility function, then those same people who turn down the first sort of bet—which I think a lot of people do—should also turn down the second sort of bet, even though it seems clear that a lot of those people would not turn down a bet that gave them a 50% chance of losing $1k and a 50% chance of winning Jeff Bezos’s entire fortune. (I personally would probably turn down a 50-50 bet between gaining $10.10 and losing $10.00. My consciously-accessible reasons aren’t about losing $10 feeling like a bigger deal than gaining $10.10, they’re about the "overhead" of making the bet, the possibility that my counterparty doesn’t pay up, and the like. And I would absolutely take a 50-50 bet between losing $1k and gaining, say, $1M, again assuming that it had been firmly enough established that no cheating was going on.)
Comment
But would you continue turning down such bets no matter how big your bankroll is? A serious investor can have a lot of automated systems in place to reduce the overhead of transactions. For example, running a casino can be seen as an automated system for accepting bets with a small edge. (Similarly, you might not think of a millionaire as having time to sell you a ball point pen with a tiny profit margin. But a ball point pen company is a system for doing so, and a millionaire might own one.) If you were playing some kind of stock/betting market, you would be wise to write a script to accept such bets up to the Kelly limit, if you could do so. Also see my reply to koreindian.
Comment
My bankroll is already enough bigger than $10.10 that shortage of money isn’t the reason why I would not take that bet. I might well take a bet composed of 100 separate $10/$10.10 bets (I’d need to think a bit about the actual distribution of wins and losses before deciding) even though I wouldn’t take one of them in isolation, but that’s a different bet.
Yes, many humans exhibit the former betting behavior but not the latter. Rabin argues that an EU maximizer doing the former will do the latter. Hence, we need to think of humans as something other than EU maximizers.
Comment
OK. But humans who work the stock market would write code to vacuum up 1000-to-1010 investments as fast as possible, to take advantage of them before others, so long as they were small enough compared to the bankroll to be approved of by fractional Kelly betting. Unless the point is that they’re *so small* that it’s not worth the time spent writing the code. But then the explanation seems to be perfectly reasonable attention allocation. We could model the attention allocation directly, or, we could model them as utility maximizers up to epsilon—like, they don’t reliably pick up expected utility when it’s under $20 or so. I’m not contesting the overall conclusion that humans aren’t EV maximizers, but this doesn’t seem like a particularly good argument.
Comment
I think if you just drop continuity from VNM you get this kind of picture, because the VNM continuity assumption corresponds to the Archimedean assumption for the reals.
I think there’s a variant of Cox’s theorem which similarly yields hyperreal/surreal probabilities (infinitesimals, not infinities, in that case).
If you want to condition on probability zero events, you might do so by rejecting the ratio formula for conditional probabilities, and instead giving a basic axiomatization of conditional probability in its own right. It turns out that, at least under one such axiom system, this is equivalent to allowing infinitesimal probability and keeping the ratio definition of conditional probability. (Sorry for not having the sources at the ready.)
Comment
The point with having a large number of bettors is to assume that they all get independent sources of randomness, so at least some will win all their bets. Handwavy math follows: Assume that we have n EV bettors and n Kelly bettors (each starting with $1), and that they’re presented with a string of bets with 0.75 probability of doubling any money they risk. The EV bettors will bet everything at each time-step, while the Kelly bettors will bet half at each time-step. For any timestep t, there will be an n such that approximately a 0.75^t fraction of EV bettors have won all their bets (by the law of large numbers), for a total earning of 0.75^t · 2^t · n = 1.5^t · n. Meanwhile, each Kelly bettor will in expectation multiply their earnings by 1.25 each time-step, and so in expectation have 1.25^t after t timesteps. By the law of large numbers, for a sufficiently large n they will in aggregate have approximately 1.25^t · n. Since 1.5^t · n > 1.25^t · n, the EV-maximizers will have more money, and we can get an arbitrarily high probability with an arbitrarily large n.
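This handwavy math checks out in simulation. In the sketch below (n and t are illustrative values of mine), each of n bettors gets an independent stream of 75%-chance-to-double bets; note that f* = 0.75 − 0.25 = 0.5 is exactly the "bet half" Kelly strategy for this gamble:

```python
import random

def aggregate_wealth(n: int, t: int, fraction: float, seed: int) -> float:
    """Total wealth of n bettors after t rounds, each independently staking
    `fraction` of wealth on a bet that doubles the stake with probability 0.75."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        wealth = 1.0
        for _ in range(t):
            stake = fraction * wealth
            wealth += stake if rng.random() < 0.75 else -stake
        total += wealth
    return total

n, t = 100_000, 10
ev_per_capita = aggregate_wealth(n, t, 1.0, seed=0) / n     # all-in: ~1.5^10 = 57.7
kelly_per_capita = aggregate_wealth(n, t, 0.5, seed=1) / n  # half-Kelly: ~1.25^10 = 9.3
print(ev_per_capita, kelly_per_capita)
```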
Comment
Ah, I see. The usual derivation of the Kelly criterion explicitly assumes that there is a specific sequence of events on which people are betting (e.g. stock market movements or horse-race outcomes); the players do not get to all bet separately on independent sources of randomness. If they could do that, then it would change the setup completely—it opens the door to agents making profits by trading with each other (in order to diversify their portfolios via risk-trades with other agents). Generally speaking, with idealized agents in economic equilibrium, they should all trade risk until they all effectively have access to the same randomness sources. Another way to think about it: compare the performance of counterfactual Kelly and EV agents on the same opportunities. In other words, suppose I look at my historical stock picks and ask how I would have performed had I been a Kelly bettor or an EV bettor. With probability approaching 1 over time, Kelly betting will seem like a better idea than EV betting in hindsight.
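The hindsight claim can also be checked directly: run half-Kelly and all-in EV betting over the *same* sequence of outcomes and count how often Kelly ends up ahead. A sketch (50 rounds and 2000 histories are illustrative parameters of mine):

```python
import random

def kelly_beats_ev(t: int, seed: int) -> bool:
    """Over one shared sequence of 75%-chance-to-double bets, does the
    half-Kelly bettor end up ahead of the all-in EV bettor?"""
    rng = random.Random(seed)
    kelly_wealth, ev_wealth = 1.0, 1.0
    for _ in range(t):
        win = rng.random() < 0.75
        kelly_wealth *= 1.5 if win else 0.5
        ev_wealth *= 2.0 if win else 0.0  # all-in: one loss means bust
    return kelly_wealth > ev_wealth

wins = sum(kelly_beats_ev(50, s) for s in range(2000))
print(wins / 2000)  # Kelly is ahead in essentially every history
```

The EV bettor only stays ahead on histories where all 50 bets win (probability 0.75^50, about 6 in 10 million), which is the "increasingly improbable strings of wins" phenomenon from the post.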
Comment
Thanks, that way to derive it makes sense! The point about free trade also seems right. With free trade, EV bettors will buy all the risk from Kelly bettors, until the EV bettors have gone bust with high probability.
So my point only applies to bettors that can’t trade. Basically, in almost every market, the majority of resources are controlled by Kelly bettors; but across all markets in the multiverse, the majority of resources are controlled by EV bettors, because they make bets such that they dominate the markets which contain most of the multiverse’s resources.
(Or if there’s no sufficiently large multiverse, Kelly bettors will dominate with arbitrarily high probability; but EV bettors will (tautologically) still get the most expected money.)
I can see how mathematicians would dislike an entity that lacks absolute guarantees, but it seems like a quite normal attribute to encounter in the real world.
That’s mostly accurate, but it leaves out an important step in the causal chain: the "too little money" meant that the wages which workers were accustomed to getting became too high. For reasons that are likely related to bargaining strategies, workers wouldn’t accept (or sometimes weren’t allowed to accept) wages that gave them fewer dollars, even when those fewer dollars bought them more goods than they were accustomed to.
In other words, there’s a path for the value of money to re-adjust, but there’s enough opposition to it that most economists have given up on it.
I’m unclear what "facilitate" is doing here. "Negative bank accounts" is one way to describe a solution, but deflation meant that pretty much everyone preferred a positive bank account to "borrow and invest".
Central banks know how to manufacture money. The main problems are figuring out the right amounts, and ensuring that central banks create those amounts.
Comment
I basically think you’re right, those arguments are weak, but this post was about me reasoning out some of the details for myself. You make a good point about independent risk. I had only half-noticed that point when thinking about this.
I think this (especially the second part) is missing a fundamental aspect of … well, not just money, but decision-making. It’s about expectations and projections into the future, not about the current definition or valuation. Debt is no more a hack than is money itself. Neither actually exist, they are simply contingent future values. Zero is not special in this world.
Here is Ole Peters: [Puzzle] "Voluntary insurance contracts constitute a puzzle because they increase the expectation value of one party’s wealth, whereas both parties must sign for such contracts to exist." [Answer]: "Time averages and expectation values differ because wealth changes are non-ergodic." Peters again: "Conceptually, its power derives from a new notion of rationality. Many reasonable models of wealth are non-stationary processes. Observables representing wealth then do not have the ergodic property of Section I, and therefore rationality must not be defined as maximizing expectation values of wealth. Rather, we propose as a null model to define rationality as maximizing the time-average growth of wealth." You write: "Kelly betting, on the other hand, assumes a finite bankroll—and indeed, might have to be abandoned or adjusted to handle negative money." [Negative interest rate?] Can you explain more? I would love to fit this conceptually into Peters’s non-ergodic growth-rate theory.
I really like this. I read part 1 as being about the way the economy or society implicitly imposes additional pressures on individuals’ utility functions. Can you provide a reference for the theorem that Kelly bettors predominate? ETA: an observation: the arguments for expected value also assume infinite value is possible, which (modulo infinite-ethics-style concerns, a significant caveat...) also isn’t realistic.