When Most VNM-Coherent Preference Orderings Have Convergent Instrumental Incentives

https://www.lesswrong.com/posts/LYxWrxram2JFBaeaq/when-most-vnm-coherent-preference-orderings-have-convergent


Intuition

The result follows because the VNM utility theorem lets you treat VNM-coherent preference orderings as isomorphic to their induced utility functions (up to positive affine transformation), and so these preference orderings have the same generic incentives as the utility functions themselves.

Formalism

Let o_1, ..., o_n be outcomes, in a sense which depends on the context; outcomes could be world-states, universe-histories, or one of several fruits. Outcome lotteries are probability distributions over outcomes, and can be represented as elements of the n-dimensional probability simplex (i.e. as element-wise non-negative unit vectors). A preference ordering \prec is a binary relation on lotteries; it need not be, e.g., complete (defined for all pairs of lotteries). VNM-coherent preference orderings are those which obey the VNM axioms. By the VNM utility theorem, coherent preference orderings induce consistent utility functions over outcomes, and consistent utility functions conversely imply a coherent preference ordering.

**Definition 1: Permuted preference ordering.** Let \phi \in S_n be an outcome permutation, and let \prec be a preference ordering. \prec_\phi is the preference ordering such that for any lotteries L, M: L \prec_\phi M if and only if \phi(L) \prec \phi(M).

EDIT: Thanks to Edouard Harris for pointing out that Definition 1 and Lemma 3 were originally incorrect.

**Definition 2: Orbit of a preference ordering.** Let \prec be any preference ordering. Its orbit S_n \cdot \prec is the set \{\prec_\phi \mid \phi \in S_n\}.

The orbits of coherent preference orderings are basically all the preference orderings induced by "relabeling" which outcomes are which. This is made clear by the following result:

**Lemma 3: Permuting coherent preferences permutes the induced utility function.** Let \prec be a VNM-coherent preference ordering which induces VNM-utility function u, and let \phi \in S_n. Then \prec_\phi induces VNM-utility function u'(o_i) = u(\phi(o_i)), where o_i is any outcome.

**Proof.** Let L, M be any lotteries, and write \phi(L) = \sum_{i=1}^n \ell_i \phi(o_i). Then

L \prec_\phi M
\iff \phi(L) \prec \phi(M) (Definition 1)
\iff \mathbb{E}_{o \sim \phi(L)}[u(o)] < \mathbb{E}_{o \sim \phi(M)}[u(o)] (u represents \prec)
\iff \mathbb{E}_{o \sim L}[u(\phi(o))] < \mathbb{E}_{o \sim M}[u(\phi(o))]
\iff \mathbb{E}_{o \sim L}[u'(o)] < \mathbb{E}_{o \sim M}[u'(o)],

so u' represents \prec_\phi. ∎
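To make Lemma 3 concrete, here is a minimal numerical sketch in Python; the function names and the particular u and \phi are illustrative choices of mine, not from the post:

```python
# A minimal numerical sketch of Lemma 3 under (corrected) Definition 1.
# Outcomes are indices 0..n-1; a lottery is a probability vector; phi acts
# on a lottery by moving the mass on o_i to o_{phi(i)}.
import random

n = 3
u = [0.0, 1.0, 5.0]              # u(o_i) -- arbitrary illustrative values
phi = {0: 2, 1: 0, 2: 1}         # an outcome permutation o_i -> o_{phi(i)}

def apply_phi(lottery):
    """Return phi(L): the mass that L puts on o_i is moved to o_{phi(i)}."""
    out = [0.0] * n
    for i, p in enumerate(lottery):
        out[phi[i]] = p
    return out

def ev(lottery, utility):
    """Expected utility of a lottery."""
    return sum(p * utility[i] for i, p in enumerate(lottery))

def rand_lottery():
    raw = [random.random() for _ in range(n)]
    s = sum(raw)
    return [x / s for x in raw]

u_prime = [u[phi[i]] for i in range(n)]   # u'(o_i) = u(phi(o_i)), as in Lemma 3

for _ in range(1000):
    L, M = rand_lottery(), rand_lottery()
    # Definition 1: L <_phi M  iff  phi(L) < phi(M) under u ...
    lhs = ev(apply_phi(L), u) < ev(apply_phi(M), u)
    # ... which should agree with comparing L and M directly under u'.
    rhs = ev(L, u_prime) < ev(M, u_prime)
    assert lhs == rhs
print("Lemma 3 check passed on 1000 random lottery pairs.")
```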

Implications

The quest for better convergence theorems

Goal-directedness seems to more naturally arise from coherence over resources. (I think the word ‘resources’ is slightly imprecise here, because resources are only resources in the normal context of human life; money is useless when alone in Alpha Centauri, but time to live is not. So we want coherence over things-which-are-locally-resources, perhaps.) In his review of Seeking Power is Often Convergently Instrumental in MDPs, John Wentworth wrote:

in a real-time strategy game, units and buildings and so forth can be created, destroyed, and generally moved around given sufficient time. Over long time scales, the main thing which matters to the world-state is resources—creating or destroying anything else costs resources. So, even though there’s a high-dimensional game-world, it’s mainly a few (low-dimensional) resource counts which impact the long-term state space. Any agents hoping to control anything in the long term will therefore compete to control those few resources.

More generally: of all the many "nearby" variables an agent can control, only a handful (or summary) are relevant to anything "far away". Any "nearby" agents trying to control things "far away" will therefore compete to control the same handful of variables.

Main thing to notice: this intuition talks directly about a feature of the world—i.e. "far away" variables depending only on a handful of "nearby" variables. That, according to me, is the main feature which makes or breaks instrumental convergence in any given universe. We can talk about that feature entirely independent of agents or agency. Indeed, we could potentially use this intuition to derive agency, via some kind of coherence theorem; this notion of instrumental convergence is more fundamental than utility functions.

In his review of Coherent decisions imply consistent utilities, John wrote:

"resources" should be a derived notion rather than a fundamental one. My current best guess at a sketch: the agent should make decisions within multiple loosely-coupled contexts, with all the coupling via some low-dimensional summary information—and that summary information would be the "resources". (This is exactly the kind of setup which leads to instrumental convergence.) By making pareto-resource-efficient decisions in one context, the agent would leave itself maximum freedom in the other contexts. In some sense, the ultimate "resource" is the agent’s action space. Then, resource trade-offs implicitly tell us how the agent is trading off its degree of control within each context, which we can interpret as something-like-utility.

This seems on-track to me. We now know what instrumental convergence looks like in unstructured environments, and how structural assumptions on utility functions affect the shape and strength of that instrumental convergence, and this post explains the precise link between "what kinds of instrumental convergence exist?" and "what does VNM-coherence tell us about goal-directedness?". I’d be excited to see what instrumental convergence looks like in more structured models.

Footnote: In terms of instrumental convergence, positive affine transformation never affects the optimality probability of different lottery sets. So for each (preference ordering) orbit element \prec_\phi, it doesn’t matter what representative we select from each equivalence class over induced utility functions — so we may as well pick u \circ \phi!
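The footnote’s claim is easy to check numerically. Here is a minimal sketch; the set sizes, utility values, and function names are illustrative assumptions of mine:

```python
# Sketch: positive affine transformations of u never change which lotteries
# in a finite set are optimal (i.e. maximize expected utility).
import random

n, num_lotteries = 4, 6
u = [random.gauss(0, 1) for _ in range(n)]
a, b = 2.5, -7.0                  # any a > 0 and any b should give the same optima

def rand_lottery():
    raw = [random.random() for _ in range(n)]
    s = sum(raw)
    return [x / s for x in raw]

def optimal_indices(lotteries, utility):
    """Indices of the expected-utility-maximizing lotteries in the set."""
    evs = [sum(p * utility[i] for i, p in enumerate(L)) for L in lotteries]
    best = max(evs)
    return {j for j, v in enumerate(evs) if v == best}

lotteries = [rand_lottery() for _ in range(num_lotteries)]
u_affine = [a * x + b for x in u]    # positive affine transformation of u
assert optimal_indices(lotteries, u) == optimal_indices(lotteries, u_affine)
print("Optimal lottery set unchanged under positive affine transformation.")
```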

Comment

https://www.lesswrong.com/posts/LYxWrxram2JFBaeaq/when-most-vnm-coherent-preference-orderings-have-convergent?commentId=hcN48C49Fa4rXikMH

Copying over a Slack comment from Abram Demski:

I think this post could be pretty important. It offers a formal treatment of "goal-directedness" and its relationship to coherence theorems such as VNM, a topic which has seen some past controversy but which has—till now—been dealt with only quite informally.

Personally I haven’t known how to engage with the whole goal-directedness debate, and I think part of the reason for that is the vagueness of the idea. Goal-directedness doesn’t seem that cruxy for most of my thinking, but some other people seem to really strongly perceive it as a crux for MIRI-type thought, and sometimes as a crux for AI risk more generally. (I once made a "tool AI" argument against AI risk myself, although in hindsight I would say that was all motivated cognition, which ignored the idea that even tool AI has to optimize strongly in order to have high capabilities.) So, as I see it, there’s been something of a stalemate between people who think the "goal-directed AI" vs "non-goal-directed AI" distinction is important for one reason or another, vs people who don’t think that.

Alex Turner seems to give real technical meaning to this distinction, showing that most VNM-coherent preferences are indeed "goal directed" in the sense of acting broadly like we expect agents to act (that is, behaving in ways consistent with instrumental convergence). However, he also gives a class of VNM-coherent preferences which are not goal-directed in this sense, instead exhibiting essentially random behavior. This gives us a plausible formal proxy for the "goal directed vs not goal directed" distinction! I’m not sure how it can/should carry the broader conversation forward, yet, but it seems like something to think about.

https://www.lesswrong.com/posts/LYxWrxram2JFBaeaq/when-most-vnm-coherent-preference-orderings-have-convergent?commentId=yE28D6w3vWuhqGRYr

Thanks for writing this. I have one point of confusion about some of the notation that’s being used to prove Lemma 3. Apologies for the detail, but the mistake could very well be on my end, so I want to make sure I lay out everything clearly.

First, \phi is being defined here as an outcome permutation. Presumably this means that 1) \phi(o_i) = o_j for some o_i, o_j; and 2) \phi admits a unique inverse \phi^{-1}(o_j) = o_i. That makes sense.

We also define lotteries over outcomes, presumably as, e.g., L = \sum_{i=1}^n \ell_i o_i, where \ell_i is the probability of outcome o_i. Of course we can interpret the o_i geometrically as mutually orthogonal unit vectors, so this lottery defines a point on the n-simplex. So far, so good.

But the thing that’s confusing me is what this implies for the definition of \phi^{-1}(L). Because \phi is defined as a permutation over outcomes (and not over probabilities of outcomes), we should expect this to be

\phi^{-1}(L) = \phi^{-1}\left(\sum_{i=1}^n \ell_i o_i\right) = \sum_{i=1}^n \ell_i \phi^{-1}(o_i).

The problem is that this seems to give a different EV from the lemma:

E_{o \sim \phi^{-1}(L)}[u(o)] = \sum_{i=1}^n \ell_i u(\phi^{-1}(o_i)) = E_{o \sim L}[u(\phi^{-1}(o))].

(Note that I’m using o as the dummy variable rather than \ell, but the LHS above should correspond to line 2 of the proof.) Doing the same thing for the M lottery gives an analogous result. And then looking at the inequality that results suggests that Lemma 3 should actually be "\prec_\phi induces u(\phi^{-1}(o_i))" as opposed to "\prec_\phi induces u(\phi(o_i))".

(As a concrete example, suppose we have a lottery L = \ell_1 o_1 + \ell_2 o_2 + \ell_3 o_3 with the permutation \phi^{-1}(o_1) = o_2, \phi^{-1}(o_2) = o_3, \phi^{-1}(o_3) = o_1. Then \phi^{-1}(L) = \ell_1 o_2 + \ell_2 o_3 + \ell_3 o_1, and our EV is

E_{o \sim \phi^{-1}(L)}[u(o)] = \ell_1 u(o_2) + \ell_2 u(o_3) + \ell_3 u(o_1) = E_{o \sim L}[u(\phi^{-1}(o))].

Yet

E_{o \sim L}[u(\phi(o))] = \ell_1 u(o_3) + \ell_2 u(o_1) + \ell_3 u(o_2) \neq E_{o \sim \phi^{-1}(L)}[u(o)],

which appears to contradict the lemma as stated.)

Note that even if this analysis is correct, it doesn’t invalidate your main claim. You only really care about the existence of a bijection rather than what that bijection is — the fact that your outcome space is finite ensures that the proportion of orbit elements that incentivize power seeking remains the same either way. (It could have implications if you try to extend this to a metric space, though.)

Again, it’s also possible I’ve just misunderstood something here — please let me know if that’s the case!
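For what it’s worth, the commenter’s concrete three-outcome example can be verified numerically. A minimal sketch, assuming arbitrary illustrative probabilities and utilities of my own choosing:

```python
# Numerically checking the concrete example above (the probabilities and
# utilities are arbitrary illustrative values, not from the comment).
ell = [0.2, 0.3, 0.5]            # l_1, l_2, l_3: probabilities of o_1, o_2, o_3
u = [1.0, 10.0, 100.0]           # u(o_1), u(o_2), u(o_3)

inv_phi = {0: 1, 1: 2, 2: 0}     # phi^{-1}(o_1)=o_2, phi^{-1}(o_2)=o_3, phi^{-1}(o_3)=o_1
phi = {v: k for k, v in inv_phi.items()}

# Build phi^{-1}(L): the mass on o_i moves to phi^{-1}(o_i).
perm_L = [0.0] * 3
for i in range(3):
    perm_L[inv_phi[i]] = ell[i]

ev_perm_lottery = sum(perm_L[j] * u[j] for j in range(3))     # E_{o~phi^{-1}(L)}[u(o)]
ev_u_inv_phi = sum(ell[i] * u[inv_phi[i]] for i in range(3))  # E_{o~L}[u(phi^{-1}(o))]
ev_u_phi = sum(ell[i] * u[phi[i]] for i in range(3))          # E_{o~L}[u(phi(o))]

assert abs(ev_perm_lottery - ev_u_inv_phi) < 1e-12  # the equality the comment derives
assert abs(ev_perm_lottery - ev_u_phi) > 1e-6       # the mismatch with u(phi(o))
print(ev_perm_lottery, ev_u_phi)                    # 32.5 vs 25.3
```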

Comment

https://www.lesswrong.com/posts/LYxWrxram2JFBaeaq/when-most-vnm-coherent-preference-orderings-have-convergent?commentId=vByMkm6d2kLuTHg5D

Thanks! I think you’re right. I think I actually should have defined \succ_\phi differently, because writing it out, it isn’t what I want. Having written out a small example, intuitively, L \succ_\phi M should hold iff \phi(L) \succ \phi(M), which will also induce u(\phi(o_i)) as we want.

I’m not quite sure what the error was in the original proof of Lemma 3; I think it may be how I converted to and interpreted the vector representation. Probably it’s more natural to represent \mathbb{E}_{\ell \sim \phi^{-1}(L)}[u(\ell)] as \mathbf{u}^\top \left(\mathbf{P}_{\phi^{-1}} \mathbf{l}\right) = (\mathbf{u}^\top \mathbf{P}_{\phi^{-1}})\mathbf{l}, which makes your insight obvious. The post is edited and the issues should now be fixed.
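For concreteness, here is a minimal NumPy sketch of that vector representation (variable names are mine, not from the comment). The associativity makes it visible that the induced utility under the original definition composes u with \phi^{-1} rather than \phi:

```python
# P_inv below is the permutation matrix P_{phi^{-1}}, which moves the mass
# a lottery puts on o_i to o_{phi^{-1}(i)}.
import numpy as np

n = 3
rng = np.random.default_rng(0)
u = rng.normal(size=n)                 # utility vector
l = rng.random(size=n)
l /= l.sum()                           # lottery as a probability vector

inv_phi = [1, 2, 0]                    # phi^{-1} as an index map: i -> inv_phi[i]
P_inv = np.zeros((n, n))
P_inv[inv_phi, np.arange(n)] = 1.0     # column i has its 1 in row inv_phi[i]

# Associativity: scoring the permuted lottery with u equals scoring the
# original lottery with the permuted utility row vector u^T P_{phi^{-1}}.
assert np.isclose(u @ (P_inv @ l), (u @ P_inv) @ l)

# And (u^T P_{phi^{-1}})_i = u(phi^{-1}(o_i)): the induced utility is
# u composed with phi^{-1}, not phi -- exactly the issue the comment raised.
assert np.allclose(u @ P_inv, u[inv_phi])
```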

Comment

No problem! Glad it was helpful. I think your fix makes sense.

"I’m not quite sure what the error was in the original proof of Lemma 3; I think it may be how I converted to and interpreted the vector representation."

Yeah, I figured maybe it was because the dummy variable \ell was being used in the EV to sum over outcomes, while the vector \mathbf{l} was being used to represent the probabilities associated with those outcomes. Because \ell and \mathbf{l} are similar, it’s easy to conflate their meanings, and if you apply \phi to the wrong one by accident, that has the same effect as applying \phi^{-1} to the other one. In any case though, the main result seems unaffected. Cheers!