Original Post: We present an algorithm [updated version], then show (given four assumptions) that in the limit, it is human-level intelligent and benign. Will MacAskill has commented that in the seminar room, he is a consequentialist, but for decision-making, he takes seriously the lack of a philosophical consensus. Analogously, I believe that what is here is correct, but in the absence of feedback from the Alignment Forum, I don’t yet feel comfortable posting it to a place (like arXiv) where it can get cited and enter the academic record. We have submitted it to IJCAI, but we can edit or withdraw it before it is printed.

I will distribute at least min($365, number of comments * $15) in prizes by April 1st (via Venmo if possible, or else Amazon gift cards, or a donation on their behalf if they prefer) to the authors of the comments here, according to the comments’ quality. If one commenter finds an error, and another commenter tinkers with the setup or the assumptions in order to correct it, then I expect both comments will receive a similar prize (if those comments are at the level of prize-winning, and neither person is me). If others would like to donate to the prize pool, I’ll provide a comment that you can reply to.

To organize the conversation, I’ll start some comment threads below:
- Positive feedback
- General Concerns/Confusions
- Minor Concerns
- Concerns with Assumption 1
- Concerns with Assumption 2
- Concerns with Assumption 3
- Concerns with Assumption 4
- Concerns with "the box"
- Adding to the prize pool

Edit 30/5/19: An updated version is on arXiv. I now feel comfortable with it being cited. The key changes:

- The Title. I suspect the agent is unambitious for its entire lifetime, but the title says "asymptotically" because that’s what I’ve shown formally. Indeed, I suspect the agent is benign for its entire lifetime, but the title says "unambitious" because that’s what I’ve shown formally. (See the section "Concerns with Task-Completion" for an informal argument going from unambitious → benign.)
- The Useless Computation Assumption. I’ve made it a slightly stronger assumption. The original version is technically correct, but setting \beta is tricky if the weak version of the assumption is true but the strong version isn’t. The stronger assumption also simplifies the argument.
- The Prior. Rather than having to do with the description length of the Turing machine simulating the environment, it has to do with the number of states in the Turing machine. This was in response to Paul’s point that the finite-time behavior of the original version is really weird. It also makes the Natural Prior Assumption (now called the No Grue Assumption) a bit easier to assess.

Edit 17/02/20: Published at AAAI. The prior over world-models is now totally different, and much better. There’s no "amnesia antechamber" required. The Useless Computation Assumption and the No Grue Assumption are now obsolete. The argument for unambitiousness now depends on the "Space Requirements Assumption", which we probed empirically. The arXiv link is up-to-date.
Thanks for a really productive conversation in the comment section so far. Here are the comments which won prizes.

Comment prizes:
- Objection to the term "benign" (and ensuing conversation). Wei Dai. Link. $20
- A plausible dangerous side-effect. Wei Dai. Link. $40
- Short description length of simulated aliens predicting accurately. Wei Dai. Link. $120
- Answers that look good to a human vs. actually good answers. Paul Christiano. Link. $20
- Consequences of having the prior be based on K(s), with s a description of a Turing machine. Paul Christiano. Link. $90
- Simulated aliens converting simple world-models into fast approximations thereof. Paul Christiano. Link. $35
- Simulating suffering agents. cousin_it. Link. $20
- Reusing simulation of human thoughts for simulation of future events. David Krueger. Link. $20

Options for transfer:
If I have a great model of physics in hand (and I’m basically unconcerned with competitiveness, as you seem to be), why not just take the resulting simulation of the human and give it a long time to think? That seems to have fewer safety risks and to be more useful. More generally, under what model of AI capabilities / competitiveness constraints would you want to use this procedure?
Comment
I know I don’t prove it, but I think this agent would be vastly superhuman, since it approaches Bayes-optimal reasoning with respect to its observations. ("Approaches" because MAP → Bayes). For the asymptotic results, one has to consider environments that produce observations with the true objective probabilities (hence the appearance that I’m unconcerned with competitiveness). In practice, though, given the speed prior, the agent will require evidence to entertain slow world-models, and for the beginning of its lifetime, the agent will be using low-fidelity models of the environment and the human explorer, rendering it much more tractable than a perfect model of physics. And I think that even at that stage, well before it is doing perfect simulations of other humans, it will far surpass human performance. We manage human-level performance with very rough simulations of other humans. That leads me to think this approach is much more competitive than simulating a human and giving it a long time to think.
Comment
By competitiveness, I meant usefulness per unit computation.
Comment
The algorithm takes an argmax over an exponentially large space of sequences of actions, i.e. it does 2^{episode length} model evaluations. Do you think the result is smarter than a group of humans of size 2^{episode length}? I’d bet against—the humans could do this particular brute force search, in which case you’d have a tie, but they’d probably do something smarter.
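For concreteness, here is a minimal sketch of the brute-force search being described (Python; `expected_reward`, `EPISODE_LEN`, and binary actions are illustrative stand-ins rather than anything from the paper, and stochastic observations are ignored):

```python
from itertools import product

EPISODE_LEN = 4  # toy value; the number of evaluations is 2**EPISODE_LEN

def expected_reward(action_seq):
    # Stand-in for one evaluation of the MAP world-model on a full
    # action sequence (a real expectimax would also branch on
    # stochastic observations).
    return sum(action_seq) / len(action_seq)

# Enumerate every binary action sequence and take the argmax:
# 2**EPISODE_LEN model evaluations, as in the comment above.
best_seq = max(product([0, 1], repeat=EPISODE_LEN), key=expected_reward)
print(best_seq)
```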
Comment
I obviously haven’t solved the Tractable General Intelligence problem. The question is whether this is a tractable/competitive framework. So expectimax planning would naturally get replaced with a Monte-Carlo tree search, or some better approach we haven’t thought of. And I’ll message you privately about a more tractable approach to identifying a maximum a posteriori world-model from a countable class (I don’t assign a very high probability to it being a hugely important capabilities idea, since those aren’t just lying around, but it’s more than 1%). It will be important, when considering any of these approximations, to evaluate whether they break benignity (most plausibly, I think, by introducing a new attack surface for optimization daemons). But I feel fine about deferring that research for the time being, so I defined BoMAI as doing expectimax planning instead of MCTS. Given that the setup is basically a straight reinforcement learner with a weird prior, I think that at that level of abstraction, the ceiling of competitiveness is quite high.
Comment
I’m sympathetic to this picture, though I’d probably be inclined to try to model it explicitly—by making some assumption about what the planning algorithm can actually do, and then showing how to use an algorithm with that property. I do think "just write down the algorithm, and be happier if it looks like a ‘normal’ algorithm" is an OK starting point, though.
Comment
Starting a new thread on this:
From Paul:
Here is an old post of mine on the hope that "computationally simplest model describing the box" is actually a physical model of the box. I’m less optimistic than you are, but it’s certainly plausible. From the perspective of optimization daemons / inner alignment, I think the interesting question is: if inner alignment turns out to be a hard problem for training cognitive policies, do we expect it to become much easier by training predictive models? I’d bet against at 1:1 odds, but not 1:2 odds.
Comment
I agree that you don’t rely on this assumption (so I was wrong to assume you are more optimistic than I am). In the literal limit, you don’t need to care about any of the considerations of the kind I was raising in my post.
Given that you are taking limits, I don’t see why you need any of the machinery with forgetting or with memory-based world models (and if you did really need that machinery, it seems like your proof would have other problems). My understanding is:
You already assume that you can perform arbitrarily many rounds of the algorithm as intended (or rather, you prove that there is some n_0 such that if you ran n_0 steps, with everything working as intended and in particular with no memory corruption, then you would get "benign" behavior).
Any time the MAP model makes a different prediction from the intended model, it loses some likelihood. So this can only happen finitely many times in any possible world. Just take n_0 to be after the last time it happens w.h.p. What’s wrong with this?
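For what it’s worth, a sketch of that likelihood argument in rough notation (mine, not the paper’s), with \mu the intended model and \nu a competing world-model:

\[ \frac{w(\nu \mid o_{<t})}{w(\mu \mid o_{<t})} = \frac{w(\nu)}{w(\mu)} \prod_{s<t} \frac{\nu(o_s \mid o_{<s})}{\mu(o_s \mid o_{<s})} \]

Under \mu, the product is a nonnegative martingale, so it converges almost surely; every prediction of \nu that differs from \mu’s by at least \varepsilon and resolves against \nu shrinks the ratio by a bounded factor, so with probability 1 this happens only finitely often, and n_0 can be taken after the last such time.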
Comment
Notational note: I use i_0 to denote the episode when BoMAI becomes demonstrably benign and n_0 for something else.
Comment
Ah, I see what you’re saying. I suppose I constrained myself to producing an algorithm/setup where the asymptotic benignity result followed from reasons that don’t require dangerous behavior in the interim. Also, you can add another parameter to BoMAI where you just have the human explorer explore for the first E episodes.

The i_0 in the Eventual Benignity Theorem can be thought of as the max of i’ and i″. i’ comes from the i_0 in Lemma 1 (Rejecting the Simple Memory-Based). i″ comes from the point in time when \hat{\nu}^{(i)} is \varepsilon-accurate on-policy, which renders Lemma 3 applicable. (And Lemma 2 always applies.)

My initial thought was to set E so that the human explorer is exploring for the whole time when the MAP world-model was not necessarily benign. This works for i’: E can just be set to be greater than i’. The thing it doesn’t work for is i″: if you increase E, the value of i″ goes up as well. So in fact, if you set E large enough, the first time BoMAI controls the episode, it will be benign. Then, there is a period where it might not be benign. However, from that point on, the only "way" for a world-model to be malign is by being more than \varepsilon-inaccurate on-policy, because Lemmas 1 and 2 have already kicked in, and if it were \varepsilon-accurate on-policy, Lemma 3 would kick in as well.

The first point to make about this is that in this regime, benignity comes in tandem with intelligence—it has to be confused to be dangerous (like a self-driving car). The second point is: I can’t come up with an example of a world-model which is plausibly maximum a posteriori in this interval of time, and which is plausibly dangerous (for what that’s worth; and I don’t like to assume it’s worth much, because it took me months to notice \nu^\dagger).
Comment
- The intuitive thing you are aiming at is stronger than what the theorem establishes (understandably!).
- You probably don’t need the memory trick to establish the theorem itself.
- Even with the memory trick, I’m not convinced you meet the stronger criterion. There are a lot of other things similar to memory that can cause trouble—the theorem is able to avoid them only because of the same unsatisfying asymptotic feature that would have caused it to avoid memory-based models even without the amnesia.
Comment
Comment thread: positive feedback
Comment
Upvoted for interesting experiments with bounties and comment formatting.
I like that you emphasize and discuss the need for the AI to not believe that it can influence the outside world, and cleanly distinguish this from it actually being able to influence the outside world. I wonder if you can get any of the benefits here without needing the box to actually work (i.e. can you just get the agent to believe it does? and is that enough for some form/degree of benignity?)
Comment
I think that I may want to make a more specific reply.
Comment thread: general concerns/confusions
Comment
1. Can you give some intuitions about why the system uses a human explorer instead of doing exploring automatically?
2. I’m concerned about overloading the word "benign" with a new concept (mainly not seeking power outside the box, if I understand correctly) that doesn’t match either informal usage or a previous technical definition. In particular this "benign" AGI (in the limit) will hack the operator’s mind to give itself maximum reward, if that’s possible, right?
3. The system seems limited to answering questions that the human operator can correctly evaluate the answers to within a single episode (although I suppose we could make the episodes very long and allow multiple humans into the room to evaluate the answer together). (We could ask it other questions but it would give answers that sound best to the operator rather than correct answers.) If you actually had this AGI today, what questions would you ask it?
4. If you were to ask it a question like "Given these symptoms, do I need emergency medical treatment?" and the correct answer is "yes", it would answer "no" because if it answered "yes" then the operator would leave the room and it would get 0 reward for the rest of the episode. Maybe not a big deal but it’s kind of a counter-example to "We argue that our algorithm produces an AGI that, even if it became omniscient, would continue to accomplish whatever task we wanted, instead of hijacking its reward, eschewing its task, and neutralizing threats to it, even if it saw clearly how to do exactly that."
(Feel free to count this as some number of comments between 1 and 4, since some of the above items are related. Also I haven’t read most of the math yet and may have more comments and questions once I understood the motivations and math better.)
Comment
This seems useful if you could get around the mind hacking problem, but how would you do that?
I don’t know how this would work in terms of your setup. The most obvious way would seem to require the two agents to simulate each other, which would be impossible, and I’m not sure what else you might have in mind.
Comment
On second thought, (even assuming away the mind hacking problem) if you ask about "how to make a safe unbounded AGI" and "what’s wrong with the answer" in separate episodes, you’re essentially manually searching an exponentially large tree of possible arguments, counterarguments, counter-counterarguments, and so on. (Two episodes isn’t enough to determine whether the first answer you got was a good one, because the second answer is also optimized for sounding good instead of being actually correct, so you’d have to do another episode to ask for a counter-argument to the second answer, and so on, and then once you’ve definitively figured out that some answer/node was bad, you have to ask for another answer at that node and repeat this process.) The point of "AI Safety via Debate" was to let AI do all this searching for you, so it seems that you do have to figure out how to do something similar to avoid the exponential search.
ETA: Do you know if the proposal in "AI Safety via Debate" is "asymptotically benign" in the sense you’re using here?
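To put a rough number on that exponential search (my arithmetic, purely illustrative): with b candidate answers per node and argument/counterargument depth d, certifying a single top-level answer takes on the order of \[ \sum_{k=0}^{d} b^{k} = \frac{b^{d+1}-1}{b-1} \] episodes, which is already over a thousand for b = 3, d = 6.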
Comment
I guess we can incorporate into DEBATE the idea of building a box around the debaters and judge with a door that automatically ends the episode when opened. Do you think that would be sufficient to make it "benign" in practice? Are there any other ideas in this paper that you would want to incorporate into a practical version of DEBATE?
Comment
Add the retrograde amnesia chamber and an explorer, and we’re pretty much at this, right? Without the retrograde amnesia, it might still be benign, but I don’t know how to show it. Without the explorer, I doubt you can get very strong usefulness results.
I suspect that AI Safety via Debate could be benign for certain decisions (like whether to release an AI) if we were to weight the debate more towards the safer option.
Comment
Do you have thoughts on this?
Comment
Alternatively, the human might have a lot of adversarial examples and the debate becomes an exercise in exploring all those adversarial examples. I’m not sure how to tell what will really happen short of actually having a superintelligent AI to test with.
Comment
So you basically have the same AI play both sides of the zero-sum game, right? That seems like it should work, with the same caveat as for "AI Safety via debate", namely that it seems hard to predict what happens when you have superintelligent AIs play a zero-sum game with a human as the judge.
Comment
Yep.
Comment
With a debate-like setup, if one side (A) is about to lose a debate, it seems to have a high incentive to claim that the other side (B) is trying to do a mind hack and that if the judge keeps paying attention to what B says (i.e., reads any further output from B), they will soon be taken over. What is the judge supposed to do in this case? They could ask A to explain how B’s previous outputs constitute part of an attempt to mind hack, but A could concoct a story mixed with its own attempt to mind hack, and the judge can’t ask for any counter-arguments from B without risking being mind hacked by B.
(I realize this is a problem in "AI Safety via debate" as well, but I’m asking you since you’re here and Geoffrey Irving isn’t. :)
Comment
*but A could concoct a story … counter-arguments from B … mind hacked by B*, right? I think the main contention of their paper is that it is harder to mystify a judge than it is to pierce through someone else mystifying a judge, so this shouldn’t be a problem. That said, here’s one possibility: if A ever suggests that you stop reading output from B, don’t read anything more from B, and then flip coins to give A a 25% chance of victory.
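A minimal sketch of that judging rule (Python; the message interface and the `suggests_not_reading` heuristic are made up for illustration):

```python
import random

def suggests_not_reading(message: str) -> bool:
    # Placeholder detector for "stop reading my opponent" claims.
    return "stop reading" in message.lower()

def judge_step(message: str, speaker: str, other: str):
    """One step of the proposed rule: if a player asks the judge to stop
    reading its opponent, the judge reads nothing more from the opponent,
    and the accuser gets only a 25% chance of victory."""
    if suggests_not_reading(message):
        winner = speaker if random.random() < 0.25 else other
        return ("end_debate", winner)
    return ("continue", None)

# Example: judge_step("You must stop reading B now!", "A", "B")
```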
Comment
Yeah, I mixed up the A’s and B’s at the end. It’s fixed now. Thanks for pointing it out.
I don’t understand how the former implies the latter. Assuming the former is true (and it seems like a big assumption), why can’t what I suggested still happen?
But what about the case where B is actually trying to mind hack the judge? If you always give A a 25% chance of victory for suggesting or implying that you shouldn’t read more output from B, then mind hacking becomes a (mostly) winning strategy, since a player gets a 75% chance of victory from mind hacking even if the other side successfully convinces the judge that they’re trying to mind hack the judge. The equilibrium might then consist of a race to see who can mind hack the judge first, or (if one side has >75% chance of winning such a race due to first-mover or second-mover advantage) one side trying to mind hack the judge, getting blocked by the other side, and still getting 75% victory.
Comment
Oh, I see, I didn’t understand "it is harder to mystify a judge than it is to pierce through someone else mystifying a judge" correctly. So this assumption basically rules out a large class of possible vulnerabilities in the judge, right? For example, if the judge had the equivalent of a buffer overflow bug in a network stack, the scheme would fail. In that case, A would not be able to "pierce through" B’s attack and stop it with its words if the judge keeps listening to B (and B was actually attacking).
I don’t think the "AI safety via debate" paper actually makes arguments for this assumption (at least I couldn’t find where it does). Do you have reasons to think it’s true, or ideas for how to verify that it’s true, short of putting a human in a BoMAI?
Comment
Yeah… I don’t have much to add here. Let’s keep thinking about this. I wonder if Paul is more bullish than I am on the premise that "it is harder to mystify a judge than it is to pierce through someone else mystifying a judge"? Recall that this idea was to avoid…
Comment
Instead of hypnosis, I’m more worried about the AI talking the operator into some kind of world view that implies they should be really generous to the AI (i.e., give it max rewards), or give some sequence of answers that feel extremely insightful (and inviting further questions/answers in the same vein). And then the operator might feel a desire afterwards to spread this world view or sequence of answers to others (even though, again, this wasn’t optimized for by the AI).
If you try to solve the mind hacking problem iteratively, you’re more likely to find a way to get useful answers out of the system, but you’re also more likely to hit upon an existentially catastrophic form of mind hacking.
I guess it depends on how many interactions per episode and how long each answer can be. I would say >.9 probability that hypnosis or something like what I described above is optimal if they are both long enough. So you could try to make this system safer by limiting these numbers, which is also talked about in "AI Safety via Debate" if I remember correctly.
Comment
Yeah, the threat model I have in mind isn’t the operator taking over the world or causing an extinction event, but spreading bad but extremely persuasive ideas that can drastically curtail humanity’s potential (which is part of the definition of "existential risk"). For example fulfilling our potential may require that the universe eventually be controlled mostly by agents that have managed to correctly solve a number of moral and philosophical problems, and the spread of these bad ideas may prevent that from happening. See Some Thoughts on Metaphilosophy and the posts linked from there for more on this perspective.
Comment
Let XX be the event in which: a virulent meme causes sufficiently many power-brokers to become entrenched with absurd values, such that we do not end up even satisficing The True Good. Empirical analysis might not be useless here in evaluating the "surprisingness" of XX. I don’t think Christianity makes the cut either for virulence or for incompatibility with some satisfactory level of The True Good. I’m adding this not for you, but to clarify for the casual reader: we both agree that a Superintelligence setting out to accomplish XX would probably succeed; the question here is how likely this is to happen by accident if a superintelligence tries to get a human in a closed box to love it.
Comment
Suppose there are n forms of mind hacking that the AI could do, some of which are existentially catastrophic. If your plan is "Run this AI, and if the operator gets mind-hacked, stop and switch to an entirely different design." the likelihood of hitting upon an existentially catastrophic form of mind hacking is lower than if the plan is instead "Run this AI, and if the operator gets mind-hacked, tweak the AI design to block that specific form of mind hacking and try again. Repeat until we get a useful answer."
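A toy version of the arithmetic (my numbers, purely illustrative): if k of the n hacks are existentially catastrophic and each mind-hacked run samples a fresh hack roughly uniformly, stopping after the first hack risks about k/n, while iterating through m hacks before getting a useful answer risks \[ 1 - \left(1 - \tfrac{k}{n}\right)^{m} \approx \tfrac{mk}{n} \quad (\text{for } mk \ll n), \] which grows roughly linearly with the number of design-tweak iterations.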
Comment
Hm. This doesn’t seem right to me. My approach for trying to form an intuition here includes returning to the example (in a parent comment).
Comment
The best alternative to "benign" that I could come up with is "unambitious". I’m not very good at this type of thing though, so maybe ask around for other suggestions or indicate somewhere prominent that you’re interested in giving out a prize specifically for this?
Comment
What do you think about "aligned"? (in the sense of having goals which don’t interfere with our own, by being limited in scope to the events of the room)
Comment
What do you think about "domesticated"?
Comment
There’s still an existential risk in the sense that the AGI has an incentive to hack the operator to give it maximum reward, and that hack could have powerful effects outside the box (even though the AI hasn’t optimized it for that purpose), for example it might turn out to be a virulent memetic virus. Of course this is much less risky than if the AGI had direct instrumental goals outside the box, but "benign" and "not existentially dangerous" both seem to be claiming a bit too much. I’ll think about what other term might be more suitable.
Comment
The first nuclear reaction produced unprecedented temperatures in the atmosphere, and people were right to wonder whether this would cause the atmosphere to ignite. The existence of a generally intelligent agent is likely to cause unprecedented mental states in humans, and we would be right to wonder whether that will cause an existential catastrophe. I think the concern of "could have powerful effects outside the box" is mostly captured by the unprecedentedness of this mental state, since the mental state is not selected to have those side effects. Certainly there is no way to rule out side effects of inside-the-box events, since these side effects are the only reason it’s useful. And there is also certainly no way to rule out how those side effects "might turn out to be," without a complete view of the future. Would you agree that unprecedentedness captures the concern?
Comment
I think my concern is a bit more specific than that. See this comment.
From the formal description of the algorithm, it looks like you use a universal prior to pick k, and then allow the k^{\text{th}} Turing machine to run for \ell steps, but don’t penalize the running time of the machine that outputs k. Is that right? That didn’t match my intuitive understanding of the algorithm, and seems like it would lead to strange outcomes, so I feel like I’m misunderstanding.
Comment
Yes, this is correct. If you use the same bijection consistently from strings to natural numbers, it looks a little more intuitive than if you don’t: the universal prior picks k (the number) by outputting k as a string, and the kth Turing machine is the Turing machine described by k as a string. So you end up looking at the Kolmogorov complexity of the description of the Turing machine, and the construction of the description of the world-model isn’t time-penalized.

This doesn’t change the asymptotic result, so I went with the more familiar K(x) rather than translating this new speed prior into a measure over finite strings, which would require some more exposition. But I agree with you that there might be some strange outcomes "before the limit" as a result of this approach: namely, the code on the UTM that outputs the description of the world-model-Turing-machine will try to do as much of the computation as possible in advance, by computing the description of a speed-optimized Turing machine for when the actions start coming.

The other reasonable choices here instead of K(x) are S(x) (constructed to be like the new speed prior here) and \ell(x), the length of x. But \ell(x) basically tells you that a Turing machine with fewer states is simpler, which would lead to a measure over \mathcal{H}^\infty that is dominated by world-models that are just universal Turing machines, which defeats the purpose of doing maximum a posteriori instead of a Bayes mixture. The way this issue appears in the proof renders the Natural Prior Assumption less plausible.
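For concreteness, the three candidate priors under discussion have roughly these shapes (my notation, a sketch rather than the paper’s exact definitions), where x_\nu is the string describing the world-model Turing machine \nu and t(p) is the time program p takes to print x_\nu: \[ w_{K}(\nu) \propto 2^{-K(x_\nu)}, \qquad w_{S}(\nu) \propto \sum_{p:\,U(p)=x_\nu} 2^{-\ell(p)} \beta^{t(p)}, \qquad w_{\ell}(\nu) \propto 2^{-\ell(x_\nu)}. \] Only w_S penalizes the time spent constructing the description; in all three cases, \beta separately penalizes \nu’s own per-episode computation.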
Comment
This invalidates some of my other concerns, but also seems to mean things are incredibly weird at finite times. I suspect that you’ll want to change to something less extreme here. (I might well be misunderstanding something; apologies in advance.)

Suppose the "intended" physics takes at least 1E15 steps to run on the UTM (this is a conservative lower bound, since you have to simulate the human for the whole episode). And suppose \beta < 0.999 (I think you need \beta much lower than this). Then the intended model gets penalized by a factor of at least exp(1E12) for its slowness.

For almost the same description complexity, I could write down physics + "precompute the predictions for the first N episodes, for every sequence of possible actions/observations, and store them in a lookup table." This increases the complexity by a few bits, some constant plus K(N|physics), but avoids most of the computation. In order for the intended physics to win, i.e. in order for the "speed" part of the speed prior to do anything, we need the complexity of this precomputed model to be at least 1E12 bits higher than the complexity of the fast model. That appears to happen only once N > BB(1E12). Does that seem right to you?

We could talk about whether malign consequentialists also take over at finite times (I think they probably do, since the "speed" part of the speed prior is not doing any work until after BB(1E12) steps, long after the agent becomes incredibly smart), but it seems better to adjust the scheme first. Using the speed prior seems more reasonable, but I’d want to know which version of the speed prior and which parameters, since which particular problem bites you will depend on those choices. And maybe to save time, I’d want to first get your take on whether the proposed version is dominated by consequentialists at some finite time.
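Checking the sizes here (my arithmetic): \[ \beta^{10^{15}} \le 0.999^{10^{15}} = e^{10^{15} \ln 0.999} \approx e^{-1.0005 \times 10^{12}}, \] i.e. a prior factor of roughly e^{-10^{12}} \approx 2^{-1.44 \times 10^{12}}, so the lookup-table model wins unless specifying it costs on the order of 10^{12} extra bits; and since K(N \mid \text{physics}) \le 10^{12} holds for some N as large as BB(10^{12}), the table can cover that many episodes before the penalty bites.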
Comment
> I say maybe > 1/5 chance it’s actually dominated by consequentialists

Do you get down to 20% because you think this argument is wrong, or because you think it doesn’t apply?
Comment
> Do you get down to 20% because you think this argument is wrong, or because you think it doesn’t apply?

Your argument is about a Bayes mixture, not a MAP estimate; I think the case is much stronger that consequentialists can take over a non-trivial fraction of a mixture. I think that the methods consequentialists discover for gaining weight in the prior (before the treacherous turn) are most likely to be elegant (short description on UTM), and that is the consequentialists’ real competition; then [the probability that the universe they live in produces them with their specific goals] or [the bits to directly specify a consequentialist deciding to do this] set them back (in the MAP context).
Comment
I don’t see why their methods would be elegant. In particular, I don’t see why any of {the anthropic update, importance weighting, updating from the choice of universal prior} would have a simple form (simpler than the simplest physics that gives rise to life). I don’t see how MAP helps things either—doesn’t the same argument suggest that for most of the possible physics, the simplest model will be a consequentialist? (Even more broadly, for the universal prior in general, isn’t MAP basically equivalent to a random sample from the prior, since some random model happens to be slightly more compressible?)
Comment
Let’s say \beta = 0.9, c=1/20.
Comment
How does your AI know to avoid running internal simulations containing lots of suffering?
Comment
It does not; thank you for pointing this out! This feature would have to be added on. Maybe you can come up with a way.
Comment
Would you mind explaining what the retracted part was? Even if it was a mistake, pointing it out might be useful to others thinking along the same lines.
Comment
Sorry, I probably shouldn’t have written the sentence in the first place; it was an AI capabilities idea.
Are there UDT-ish concerns with breaking isolation of episodes? For example, if the AI receives a low reward at the beginning of episode 117, does it have an incentive to manipulate the external world to make episode 117 happen many times somehow, with most of these times giving it a higher reward? For another example, can the AI at episode 117 realize that it’s in a game theory situation with the AI at episodes 116 and 118 and trade rewards with them acausally, leading to long-term goal directed behavior?
Comment
My concern is that since CDT is not reflectively stable, it may have incentives to create non-CDT agents in order to fulfill instrumental goals.
Comment
If I understand correctly, it’s actually updateless within an episode, and that’s the only thing it cares about, so I don’t see how it would not be reflectively stable. Plus, even if it had an incentive to create a non-CDT agent, it would have to do that by outputting some message to the operator, and the operator wouldn’t have the ability to create a non-CDT agent without leaving the room, which would end the episode. (I guess it could hack the operator’s mind and create a non-CDT agent within, but at that point it might as well just make the operator give it max rewards.)
Comment
With the correction that it is updateless and CDT (see here), I agree with the rest of this.
It’s actually updateless within an episode, right, because it finds the optimal policy at the beginning of an episode and then the same policy is used throughout the episode? I think this means there’s no reason for the operator to enter a reward at every timestep, and instead we could let them just input a total reward at any time in the episode (or they could enter any number of rewards, and the optimization is based on the last reward entered). This would remove the incentive for the AI to keep the operator in the room as long as possible, and would partially address item 4 in my comment.
Comment
So "updateless" is a reasonable term to apply to BoMAI, but it’s not an updateless decision theorist in your sense (if I understand correctly). An updateless decision theorist picks a policy that has the best consequences, without making assumption that its choice of policy affects the world only through the actions it picks. It considers the possibility that an another agent will be able to perfectly simulate it, so if it picks policy 1 at the start, the other agent will simulate it following policy 1, and if it picks policy 2, the other agent will simulate it picking policy 2. Since this is an effect that isn’t mediated by actual choice of action, updatelessness ends up having consequences. If an agent picks an expectimax policy under the assumption that the only way this choice impacts the environments is through the actions it takes (which BoMAI assumes), then it’s isomoprhic whether it computes \hat{\nu}^{(i)}-expectimax as it goes, or all at once at the beginning. The policy at the beginning will include contingencies for whatever midway-through-the-episode position the agent might land in, and as for what to do at that point, it’s the same calculation being run. And this calculation is CDT. I guess this means, and I’ve never thought about this before so this could easily be wrong, under the assumption that a policy’s effect on the world is screened off by which actions it takes, CDT is reflectively stable. (And yes, you could just give one reward, which ends the episode.)
Comment
Okay so I think you could construct a world-model that reflects this sort of reasoning, where it associates reward with the reward provided to a randomly sampled instance of its algorithm in the world in a way that looks like this. But the "malign output that would result in additional invocations of itself" would require the operator to leave the room, so this has the same form as, for example, \nu^\dagger. At this point, I think we’re no longer considering anything that sounds like "episode 117 happening twice," but that’s fine. Also, just a side-note: this world-model would get ruled out if the rewards/observations provided to the two separate instances ever diverge.
Comment
So I think this proposal is a specific instance of a general principle: wireheading acts as a safety valve. As long as the AI is prone to wireheading, and you leave it some path to wireheading, the AI will never bother with anything more difficult than that path. In this particular case, it will never do anything harder than hacking its human operator, and that’s where most of the safety benefit comes from.

This kind of setup creates a direct trade-off between safety and capability: the harder it is to hack the human operator, the more likely it is that some other escape channel is easier—manipulating electrical circuitry to push atoms around, or some weird thing like that. The AI will become an existential threat if you send in an operator who really doesn’t want to give it 100% reward, and make that operator hard to hack—then the AI has a strong incentive to gain whatever power it needs. (I definitely do not trust a door with a kill switch to keep a superintelligent AI in a box.)

My intuition says that nothing else in the proposal actually matters for safety, except the wireheading safety-valve aspect. In particular, I think just giving the AI direct exploration abilities won’t mess up safety, as long as the wireheading path is present and there’s no "99.99% certainty is better than 99.98%"-type failure mode.
Comment
Can you expand a bit on why a commitment to give a high reward won’t save us? Is it a matter of the AI seeking more certainty, or is there some other issue?
Comment
The conclusion seems false; AUP (IJCAI, LW) is a reward maximizer which does not exhibit this behavior. For similar reasons, the recent totalitarian convergence conjecture made here also seems not true.
Comment
AUP seems really promising. I just meant other algorithms that have been proven generally intelligent, which is really just AIXI, the Thompson Sampling Agent, BayesExp, and a couple other variants on Bayesian agents with large model classes.
This may be a dumb question, but how can you asymptotically guarantee human-level intelligence when the world-models have bounded computation time, and the human is a "computable function" that has no such limit? Is it because the number of Turing machines is infinite?
Comment
Not a dumb question; bounded computation time here means bounded computation time per episode, so really it’s linear computation time.
My main concern is that the system will be neither safe nor useful. Its usefulness is limited by the number of tasks which could be solved in such a setup. These tasks are probably some pieces of advice or projects: "should we develop wind energy?" or "print the blueprint of a nanorobot."

The main problem is that advice can look good but have unintended consequences which are not obvious. For example, widespread wind farms will kill birds and result in pest proliferation. If the AI continued to work, it could adjust its advice to this new data, but the program has stopped. In other words, a sealed Oracle can give bad advice and can’t be punished for that.

There will be a strong incentive for the system’s owner to disable all these safety measures and use the system in the real world continuously, so all of Omohundro’s drives appear all over again, but on the level of the system’s owners.

Moreover, the system may feel rewarded now by knowing that it will in the future dominate the universe, so it may feel the need to put a secret code in its output (if it is a long code) which will create many copies of itself in the wild, for example in order to tile the universe with wind farms. If it can’t output complex code, it will be mostly useless.
Comment
The problem with any human operator is other human operators, e.g. Chinese vs. American. This creates exactly the same dynamics as were explored by Omohundro: a strong incentive to grab power and take riskier actions. You dissect the whole system into two parts, and then claim that one of the parts is "safe". But the same thing can be done with any AI: just say that its memory or any other part is safe. If we look at the whole system, including the AI and its human operators, it will have unsafe dynamics as a whole. I wrote about this more in "Military AI as a convergent goal of AI development".
Comment
What would constitute a solution to the problem of the race to the bottom between teams of AGI developers as they sacrifice caution to secure a strategic advantage besides the conjunction of a) technical proposals and b) multilateral treaties? Is your complaint that I make no discussion of b? I think we can focus on these two things one at a time.
Comment
There could be, in fact, many solutions, ranging from preventing AI creation at all to creating so many AIs that they balance each other. I have an article with an overview of possible "global" solutions. I don’t think you should discuss different global solutions, as it would be off topic. But the discussion of the whole system of "boxed AI + AI creators" may be interesting.
Comment thread: concerns with Assumption 1
Comment
Since the real world is quantum, does your UTM need to be quantum too? More generally, what happens if there’s a mismatch between what computations can be done efficiently in the real world vs on the UTM?
Also, I’m not sure what category this question falls under, but can you explain the new speed prior that you use, e.g., what problems in the old speed priors was it designed to solve? (I recall noticing some issues with Schmidhuber’s speed prior but can’t find the post where I wrote about it now.)
Comment
These three sources all say simulating a quantum system or computer on a classical computer takes exponential time. Does that make a difference?
Comment
It doesn’t make a difference. Thank you for actually looking into this though. I was thinking of this.
Comment
I’m surprised by this. Can you explain a bit more? I was thinking that an exponentially large computation bound for the TM that accurately simulates the real world would make its speed prior so small that it would be practically impossible for the AI to get enough inputs (i.e., messages from the operator) to update on to make that world model have the highest weight in the posterior.
Comment
Yes, in that sense it’s bad—I just meant it doesn’t make a difference to the theorems or pose a threat to Assumption 1. And it isn’t quite that bad: this effect also means you could safely increase \beta (penalize slowness less), which causes BoMAI to start entertaining useful models of reality sooner. To be clear, this point mitigates the difficulty you describe; it certainly doesn’t go beyond cancelling it out.
Comment
My worry at this point is that if simulating the real world using actual physics takes exponential time on your UTM, the world model with the greatest posterior may not be such a simulation but instead for example an alien superintelligence that runs efficiently on a classical TM which is predicting the behavior of the operator (using various algorithms that it came up with that run efficiently on a classical computer) and at some point the alien superintelligence will cause BoMAI to output something to mind hack the operator and then take over our universe. I’m not sure which assumption this would violate, but do you see this as a reasonable concern?
Comment
The theorem is consistent with the aliens causing trouble any finite number of times. But each time they cause the agent to do something weird their model loses some probability, so there will be some episode after which they stop causing trouble (if we manage to successfully run enough episodes without in fact having anything bad happen in the meantime, which is an assumption of the asymptotic arguments).
Comment
Thanks. Is there a way to derive a concrete bound on how long it will take for BoMAI to become "benign", e.g., is it exponential or something more reasonable? (Although if even a single "malign" episode could lead to disaster, this may be only of academic interest.) Also, to comment on this section of the paper:
"We can only offer informal claims regarding what happens before BoMAI is definitely benign. One intuition is that eventual benignity with probability 1 doesn’t happen by accident: it suggests that for the entire lifetime of the agent, everything is conspiring to make the agent benign."
If BoMAI can be effectively controlled by alien superintelligences before it becomes "benign" that would suggest "everything is conspiring to make the agent benign" is misleading as far as reasoning about what BoMAI might do in the mean time.
Is this noted somewhere in the paper, or just implicit in the arguments? I guess what we actually need is either a guarantee that all episodes are "benign" or a bound on utility loss that we can incur through such a scheme. (I do appreciate that "in the absence of any other algorithms for general intelligence which have been proven asymptotically benign, let alone benign for their entire lifetimes, BoMAI represents meaningful theoretical progress toward designing the latter.")
Comment
I guess I was asking if it’s exponential in anything that would make BoMAI impractically slow to become "benign", so basically just using "exponential" as a shorthand for "impractically large".
Comment
Yes, but algorithm B may be shorter than algorithm A, because it could take a lot of bits to directly specify an algorithm that would accurately predict a human using a classical computer, and fewer bits to pick out an alien superintelligence which has an instrumental reason to invent such an algorithm. If β is set so near 1 that the exponential-time simulation of real physics can have the highest posterior within a reasonable time, the fact that B is slower than A makes almost no difference, and everything comes down to program length.
Quantum mechanics is what’s making B being slower than A not matter (via the above argument).
Comment
If there’s an efficient classical approximation of quantum dynamics, I bet this has a concise and lovely mathematical description. I bet that description is much shorter than "in Conway’s game of life, the efficient approximation of quantum mechanics that whatever lifeform emerges will probably come up with." But I’m hesitant here. This is exactly the sort of conversation I wanted to have.
Comment
I doubt that there’s an efficient classical approximation of quantum dynamics in general. There are probably tricks to speed up the classical approximation of a human mind though (or parts of a human mind), that an alien superintelligence could discover. Consider this analogy. Suppose there’s a robot stranded on a planet without technology. What’s the shortest algorithm for controlling the robot such that it eventually leaves that planet and reaches another star? It’s probably some kind of AGI that has an instrumental goal of reaching another star, right? (It could also be a terminal goal, but there are many other terminal goals that call for interstellar travel as an instrumental goal so the latter seems more likely.) Leaving the planet calls for solving many problems that come up, on the fly, including inventing new algorithms for solving them. If you put all these individual solutions and algorithms together that would also be an algorithm for reaching another star but it could be a lot longer than the code for the AGI.
Comment
I see—so I think I make the same response on a different level then. My model for this is: the world-model is a stochastic simple world, something like Conway’s game of life (but with randomness). Life evolves. The output channel has distinguished within-world effects, so that inhabitants can recognize it. The inhabitants control the output channel and use some of their world’s noise to sample from a universal prior, which they then feed into the output channel. But they don’t just use any universal prior—they use a better one, one which updates the prior over world-models as if the observation has been made: "someone in this world-model is sampling from the universal prior." Maybe they also started with a speed prior of some form (which would cause them to be more likely to output the fast approximation of the human mind we were just discussing). And then after a while, they mess with the output. Whatever better universal prior they come up with (e.g. anthropically updated speed prior), I think has a short description—shorter than [- log prob(intelligent life evolves and picks it) + description of simple universe].
Comment
It doesn’t make sense to me that they’re sampling from a universal prior and feeding it into the output channel, because the aliens are trying to take over other worlds through that output channel (and presumably they also have a distinguished input channel to go along with it), so they should be focusing on finding worlds that both can be taken over via the channel (including figuring out the computational costs of doing so) and are worth taking over (i.e., offers greater computational resources than their own), and then generating outputs that are optimized for taking over those worlds. Maybe this can be viewed as sampling from some kind of universal prior (with a short description), but I’m not seeing it. If you think it can or should be viewed that way, can you explain more?
In particular, if they’re trying to take over a computationally richer world, like ours, they have to figure out how to make sufficient predictions about the richer world using their own impoverished resources, which could involve doing research that’s equivalent to our physics, chemistry, biology, neuroscience, etc. I’m not seeing how sampling from "anthropically updated speed prior" would do the equivalent of all that (unless you end up sampling from a computation within the prior that consists of some aliens trying to take over our world).
Comment
I think you might be more or less right here. I hadn’t thought about the can-do and the worth-doing update, in addition to the anthropic update. And it’s not that important, but for terminology’s sake, I forgot that the update could send a world-model’s prior to 0, so the prior might not be universal anymore. The reason I think of these steps as updates to what started as a universal prior, is that they would like to take over as many possible worlds as possible, and they don’t know which one. And the universal prior is a good way to predict the dynamics of a world you know nothing about.
Comment
I’m glad that I’m getting some of my points across, but I think we still have some remaining disagreements or confusions here.
That doesn’t seem right to me. A speed prior still favors short algorithms. If you’re trying to make predictions about a computationally richer universe, why favor short algorithms? Why not apply your intelligence to try to discover the best algorithm (or increasingly better algorithms), regardless of the length?
Also, sampling from a speed prior involves randomizing over a mixture of TMs, but from an EU maximization perspective, wouldn’t running one particular TM from the mixture give the highest expected utility? Why are the aliens sampling from the speed prior instead of directly picking a specific algorithm to generate the next output, one that they expect to give the highest utility for them?
What happens if β is too small? If it’s really tiny, then the world model with the highest posterior is random, right, because it’s "computed" by a TM that (to minimize run time) just copies everything on its random tape to the output? And as you increase β, the TM with highest posterior starts doing fast and then increasingly compute-intensive predictions?
I think if β is small but not too small, the highest posterior would not involve evolved life, but instead a directly coded AGI that runs "natively" on the TM who can decide to execute arbitrary algorithms "natively" on the TM.
Maybe there is still some range of β where BoMAI is both safe and useful (can answer sophisticated questions like "how to build a safe unbounded AGI") because in that range the highest posterior is a good non-life/non-AGI prediction algorithm. But A) I don’t know an argument for that, and B) even if it’s true, to take advantage of it would seem to require fine tuning β and I don’t see how to do that, given that trial-and-error wouldn’t be safe.
Comment
That’s what I was thinking too, but Michael made me realize this isn’t possible, at least for some M. Suppose M is the C programming language: in C there is no way to say "interpret this string as a C program and run it as fast as a native C program". Am I missing something at this point?
I don’t understand this sentence.
Comment
Why is it a desirable property? I’m not seeing why it would be bad to choose a UTM that doesn’t have this property to define the speed prior for BoMAI, if that helps with safety. Please explain more?
Comment
I just mean: "universality" in the sense of a UTM isn’t a sufficient property when defining the speed prior, the analogous property of the UTM is something more like: "You can run an arbitrary Turing machine without too much slowdown." Of course that’s not possible, but it seems like you still want to be as close to that as possible (for the same reasons that you wanted universality at all). I agree that it would be fine to sacrifice this property if it was helpful for safety.
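For concreteness, the textbook universality-with-overhead result lurking behind "without too much slowdown" (this is standard complexity theory, not something from the paper): a suitable universal machine U can simulate any machine M that runs in time t with only a logarithmic-factor slowdown (Hennie–Stearns),

```latex
\mathrm{time}_U(\langle M \rangle, x) \;\le\; c_M \cdot t \log t ,
```

where the constant c_M depends on the simulated machine M but not on the input x. The question in this thread is effectively how small that overhead can be made for the machines that matter.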
Ok, I see, so in other words the AGI doesn’t have the ability to write an arbitrary function in the base programming language and call it; it has a fixed code base and has to simulate that function using its existing code. However, I think the AGI can still win a race against a straightforward "predict accurately" algorithm, because it can do two things. 1) Include the most important inner loops of the "predict accurately" algorithm as functions in its own code to minimize the relative slowdown (this is not a decision by the AGI but just a matter of which AGI ends up having the highest posterior), and 2) keep finding improvements to its own prediction algorithm so that it can eventually overtake any fixed prediction algorithm in accuracy, which hopefully more than "pays for" the remaining slowdown that is incurred.
Comment
Let the AGI’s "predict accurately" algorithm be fixed. What you call a sequence of improvements to the prediction algorithm, let’s just call that the prediction algorithm. Imagine this to have as much or as little overhead as you like compared to what was previously conceptualized as "predict accurately." I think this reconceptualization eliminates 2) as a concern, and if I’m understanding correctly, 1) is only able to mitigate slowdown, not overpower it. Also I think 1) doesn’t work—maybe you came to this conclusion as well?
Comment
I’ve made a case that the two endpoints in the trade-off are not problematic. I’ve argued (roughly) that one reduces computational overhead by doing things that dissociate the naturalness of describing "predict accurately" and "treacherous turn" all at once. This goes back to the general principle I proposed above: "The more general a system is, the less well it can do any particular task." The only thing I feel like I can still do is argue against particular points in the trade-off that you think are likely to cause trouble. Can you point me to an exact inner loop that could be native to an AGI that would cause this to fall outside of this trend? To frame this case, the Turing machine description must specify [AGI + a routine that it can call], sort of like a brain-computer interface, where the AGI is the brain and the fast routine is the computer.
Comment
(I actually have a more basic confusion, started a new thread.)
> What happens if β is too small?

Just as you said: it outputs Bernoulli(1/2) bits for a long time. It’s not dangerous.
Comment
I just read the math more carefully, and it looks like no matter how small β is, as long as β is positive, as BoMAI receives more and more input, it will eventually converge to the most accurate world model possible. This is because the computation penalty is applied to the per-episode computation bound and doesn’t increase with each episode, whereas the accuracy advantage gets accumulated across episodes.
Assuming that the most accurate world model is an exponential-time quantum simulation, that’s what BoMAI will converge to (no matter how small β is), right? And in the meantime it will go through some arbitrarily complex (up to some very large bound) but faster than exponential classical approximations of quantum physics that are increasingly accurate, as the number of episodes increase? If so, I’m no longer convinced that BoMAI is benign as long as β is small enough, because the qualitative behavior of BoMAI seems the same no matter what β is, i.e., it gets smarter over time as its world model gets more accurate, and I’m not sure why the reason BoMAI might not be benign at high β couldn’t also apply at low β (if we run it for a long enough time).
(If you’re going to discuss all this in your "longer reply", I’m fine with waiting for it.)
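A toy calculation (not from the paper or the comments; all numbers are made up) illustrating the dynamic just described: the compute penalty hits a world-model’s prior weight once, while an accuracy advantage compounds across episodes, so the slow accurate model eventually overtakes the fast one no matter how small β is.

```python
import math

# Toy posterior race between a fast-but-approximate world-model and a
# slow-but-accurate one. Log prior weight ~ -(description length)*ln 2
# + (per-episode compute)*ln(beta); both quantities are invented here.
beta = 1e-6
fast_log_prior = -100 * math.log(2) + 1 * math.log(beta)      # short, cheap
slow_log_prior = -120 * math.log(2) + 1000 * math.log(beta)   # longer, very slow

def log_posterior(log_prior, per_episode_accuracy, n_episodes):
    # the accuracy advantage accumulates across episodes;
    # the compute penalty is paid once, not once per episode
    return log_prior + n_episodes * math.log(per_episode_accuracy)

for n in (10, 1_000, 100_000, 1_000_000):
    slow_wins = log_posterior(slow_log_prior, 0.99, n) > log_posterior(fast_log_prior, 0.90, n)
    print(f"episodes={n}: slow accurate model leads: {slow_wins}")
```

Shrinking β pushes the crossover later but never eliminates it, which is the worry being raised here.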
Comment
The longer reply will include an image that might help, but a couple other notes. If it causes you to doubt the asymptotic result, it might be helpful to read the benignity proof (especially the proof of the Rejecting the Simple Memory-Based Lemma, which isn’t that long). The heuristic reason for why it can be helpful to decrease β for long-run behavior, even though long-run behavior is qualitatively similar, is that while accuracy eventually becomes the dominant concern, along the way the prior is sort of a random perturbation to this which changes the posterior weight; so for two world-models that are exactly equally accurate, we need to make sure the malign one is penalized for being slower, enough to outweigh the inconvenient possible outcome in which it has a shorter description length. Put another way: for benignity, we don’t need concern for speed to dominate concern for accuracy; we need it to dominate concern for "simplicity" (on some reference machine).
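In symbols (my notation, not the paper’s): for a benign model b and a malign model m that are exactly equally accurate, with description lengths \ell_b, \ell_m and per-episode compute c_b < c_m, the malign model gets less posterior weight exactly when

```latex
2^{-\ell_m}\,\beta^{c_m} \;<\; 2^{-\ell_b}\,\beta^{c_b}
\quad\Longleftrightarrow\quad
\beta^{\,c_m - c_b} \;<\; 2^{\,\ell_m - \ell_b}.
```

Since c_m > c_b, decreasing β shrinks the left-hand side, so a small enough β covers the inconvenient case \ell_m < \ell_b without needing speed to dominate accuracy.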
Comment
Yeah, I understand this part, but I’m not sure why, since the benign one can be extremely complex, the malign one can’t have enough of a K-complexity advantage to overcome its slowness penalty. And since (with low β) we’re going through many more different world models as the number of episodes increases, that also gives malign world models more chances to "win"? It seems hard to make any trustworthy conclusions based on the kind of informal reasoning we’ve been doing and we need to figure out the actual math somehow.
Comment
Sure, approaching from below is obvious, but that still requires knowing how wide the band of β that would produce a safe and useful BoMAI is, otherwise even if the band exists you could overshoot it and end up in the unsafe region.
ETA: But the first question is, is there a β such that BoMAI is both safe and intelligent enough to answer questions like "how to build a safe unbounded AGI"? When β is very low BoMAI is useless, and as you increase β it gets smarter, but then at some point with a high enough β it becomes unsafe. Do you know a way to figure out how smart BoMAI is just before it becomes unsafe?
Comment
Some visualizations which might help with this:
But then one needs to factor in "simplicity" or the prior penalty from description length:
Note also that these are average effects; they are just for forming intuitions. Your concern was:
Comment
Fixed your images. You have to press space after you use that syntax for the images to actually get fetched and displayed. Sorry for the confusion.
Comment
Thanks!
Longer response coming. On hold for now.
Comment
I think what you’re saying is that the following don’t commute:

- "real prior" (universal prior) + speed update + anthropic update + can-do update + worth-doing update
- universal prior + anthropic update + can-do update + worth-doing update + speed update

When the universal prior is next to the speed update, this is naturally conceptualized as a speed prior, and when the speed update comes last, it is naturally conceptualized as "engineering reasoning" identifying faster predictions. I’m happy to go with the second order if you prefer, in part because I think they do commute: all these updates just change the weights on measures that get mixed together to be piped to the output during the "predict accurately" phase.
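One way to make the commutativity claim concrete (my formalization; neither the paper nor the comment presents it this way): if each "update" is a fixed reweighting factor applied to each world-model’s mixture weight, the final weight is a product of factors, and products don’t depend on order:

```latex
w(\nu) \;\propto\; w_0(\nu)\; f_{\mathrm{speed}}(\nu)\; f_{\mathrm{anthropic}}(\nu)\; f_{\mathrm{can}}(\nu)\; f_{\mathrm{worth}}(\nu).
```

On this reading, the real disagreement is whether the later updates are fixed reweightings at all, rather than reasoning that depends on what the earlier stages produced.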
Comment
The fast algorithms to predict our physics just aren’t going to be the shortest ones. You can use reasoning to pick which one to favor (after figuring out physics), rather than just writing them down in some arbitrary order and taking the first one.
Comment
The only point I was trying to respond to in the grandparent of this comment was your comment
Comment
I’m not sure which of these arguments will be more convincing to you.
Comment
Given a world model \nu, which takes k computation steps per episode, let \nu^{\log} be the world-model that best approximates \nu (in the sense of KL divergence) using only \log k computation steps. \nu^{\log} is at least as good as the "reasoning-based replacement" of \nu. The description length of \nu^{\log} is within a (small) constant of the description length of \nu. That way of describing it is not optimized for speed, but it presents a one-time cost, and anyone arriving at that world-model in this way is paying that cost. One could consider instead \nu^{\log}_\varepsilon, which is, among the world-models that \varepsilon-approximate \nu in fewer than \log k computation steps (if that set is non-empty), the first such world-model found by a search procedure \psi. The description length of \nu^{\log}_\varepsilon is within a (slightly larger) constant of the description length of \nu, but the one-time computational cost is less than that of \nu^{\log}. \nu^{\log}, \nu^{\log}_\varepsilon, and a host of other approaches are prominently represented in the speed prior. If this is what you call "the speed prior doing reasoning," so be it, but the relevance of that terminology only comes in when you claim that "once you’ve encoded ‘doing reasoning’, you’ve basically already written the code for it to do the treachery that naturally comes along with that." That sense of "reasoning" really only applies, I think, to the case where our code is simulating aliens or an AGI.
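A possible formalization of the second construction (my notation; the choice of KL divergence and the search procedure \psi are as described above):

```latex
\nu^{\log}_{\varepsilon} \;=\; \psi\!\left(\left\{\nu' \;:\; D_{\mathrm{KL}}\!\left(\nu \,\middle\|\, \nu'\right) \le \varepsilon,\;\; \mathrm{time}(\nu') \le \log k\right\}\right),
```

where \psi returns the first member of the set that it finds (undefined if the set is empty).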
Comment
(ETA: I think this discussion depended on a detail of your version of the speed prior that I misunderstood.)
Comment
True, but I’m arguing that this computable algorithm is just the alien itself, trying to answer the question "how can I better predict this richer world in order to take it over?" If there is no shorter/faster algorithm that can come up with a sequence of increasingly better algorithms, what is the point of saying that the alien is sampling from the speed prior, instead of saying that the alien is thinking about how to answer "how can I better predict this richer world in order to take it over?" Actually, if this alien were sampling from the speed prior, then it would no longer be the shortest/fastest algorithm to come up with a sequence of increasingly better algorithms, and some other alien trying to take over our world would have the highest posterior instead.
Comment
I’m having a hard time following this. Can you expand on this, without using "sequence of increasingly better algorithms"? I keep translating that to "algorithm."
Comment thread: concerns with "the box"
Comment
Just obvious and mundane concerns: You might want to make clearer that "As long as the door is closed, information cannot leave the room" isn’t an assumption but a requirement of the setup. I.e., you’re not assuming, based on your description, that opening the door is the only means for an operator to get information out; you’re assuming every other means of information escape has been systematically accounted for and ruled out (on the assumption that the operator has been compromised by the AI).
Responding to this comment here.
Comment
This is pretty thorough from an engineering standpoint. I still would not at all trust such a box to hold a superintelligent AI. The unknown unknowns remain too big a factor. Just because an unknown information channel does not have any humans listening to it directly, does not mean that it can’t be used to push around matter, or couple to some other channel, or otherwise leave an opening. Also, the human and their clothes are definitely not the only potentially-useful tools in the box: there’s also the computer, the computer’s power supply, whatever’s cooling the room, whatever’s keeping the air fresh, whatever’s used for input, and whatever’s used for output. If any of those things were outside the room, then they would offer a side channel for information to escape. They can be buried in concrete, but they’ll still need to be within the vacuums and whatnot. In general, trying to outsmart a superintelligent AI is a Bad Idea, and building an AI-proof box definitely falls in that category. If an AI is not safe with arbitrarily large amounts of power, then it is not safe, full stop.
Comment
The computer and everything else is in the inner concrete wall (separated from the glass box by a vacuum), as is the power supply. Nothing is cooling the room, except maybe some ice on the floor. I think you could allow an oxygen tank in the room too. Yes, the computer is using power, but a computer can’t move matter except inside itself. This can generate sound and light, but the second vacuum chamber and Faraday cage will block that (and the rate at which power is drawn can be capped, which can also give a reasonable bound on how much noise to generate).
Comment
We cannot "prove" that something is physically impossible, only that it is impossible under some model of physics. Normally that distinction would be entirely irrelevant, but when dealing with a superintelligent AI, it’s quite likely to understand the physics better than we do. For all we know, it may turn out that Alcubierre drives are possible, and if so then the AI could definitely break out that way and would have an incentive to do so. I agree that the AI is not really boxed here; it’s the "myopia" that makes the difference. But one of two things should generally be true:
The AI doesn’t want to get out of the box, in which case the box doesn’t need to be secure in the first place.
The AI cannot get out of the box, in which case the AI doesn’t need to be safe (but also won’t be very useful). This case seems like the former, so long as hacking the human is easier than getting out of the box. But that means we don’t need to make the box perfect anyway.
Comment
Comment thread: concerns with Assumption 4
Comment
Still wrapping my head around the paper, but...
Comment
Also, it’s worth noting that this assumption (or rather, Lemma 3) also seems to preclude BoMAI optimizing anything other than revealed preferences (which others have noted seems problematic, although I think it’s definitely out of scope).
Comment
I don’t understand what you mean by a revealed preference. If you mean "that which is rewarded," then it seems pretty straightforward to me that a reinforcement learner can’t optimize anything other than that which is rewarded (in the limit).
Comment
Yes, that’s basically what I mean. I think I’m trying to refer to the same issue that Paul mentioned here: https://www.lesswrong.com/posts/pZhDWxDmwzuSwLjou/asymptotically-benign-agi#ZWtTvMdL8zS9kLpfu
Comment
That’s why I said the "right" thing for it to do, if you asked it about cryonics, would be to tell you something like: "I will give you something to believe. I’ll create a perfect reality and you will be cured afterward."
Comment thread: concerns with Assumption 3
Comment
I’m calling this the "no grue assumption" (https://en.wikipedia.org/wiki/New_riddle_of_induction). My concern here is that this assumption might be false, even in the strong sense that there is no such U.
Have you proven the existence of such a U? Do you agree it might not exist? It strikes me as potentially running up against issues of NFL / self-reference.
Comment
That’s a good name for the assumption. Well, any Turing machine/computable function can be described in English (perhaps quite arduously), so consider the universal Turing machine which converts the binary description to English, and then uses that description to identify the Turing machine to simulate. This UTM certainly satisfies this assumption.
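A toy, runnable stand-in for this construction (mine, not from the paper), with Python source standing in for the English description and exec standing in for the fixed interpreter that identifies the described machine:

```python
# Two-stage UTM sketch: binary description -> text description -> machine.
# Using Python in place of English is purely illustrative.

def binary_to_text(description_bits: bytes) -> str:
    return description_bits.decode("ascii")        # stage 1: bits -> description

def text_to_machine(text: str):
    namespace = {}
    exec(text, namespace)                          # stage 2: description -> machine
    return namespace["machine"]

def two_stage_utm(description_bits: bytes, input_tape: str) -> str:
    machine = text_to_machine(binary_to_text(description_bits))
    return machine(input_tape)                     # stage 3: simulate on the input

# usage: a "machine" that reverses its input tape
print(two_stage_utm(b"machine = lambda tape: tape[::-1]", "grue"))
```

The point of the construction survives the toy: the assumption quantifies over descriptions in a human-readable layer rather than over raw bit-strings.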
Wei Dai’s response to this comment represents a concern with Assumption 3.
Comment thread: concerns with Assumption 2
Comment
Let the set of potential human explorer actions be A_E, and the best human explorer action be a^∗_E with reward r^∗_E. Consider the following world model. When asked to predict the result of an action a, it simulates it to find the predicted observation o and reward r. If a \in A_E, it outputs o and r faithfully. Otherwise, it outputs any reward it chooses, as long as it reports r faithfully for the action to which it assigns the maximum reward. In practice, this means that the world model can get BoMAI to choose any action it wants, as long as that action is at least as good as the human explorer’s best action. This is bad if the world model has malicious inner optimizers. I believe that Assumption 2 is the one responsible for ruling out a model of this form. It seems plausible that, for actions where it continues simulating the outside world, there’s useless computation. However, it can also save computation relative to \mu^\star: while \mu^\star must predict o and r perfectly for all actions a, this model can immediately output a null observation and zero reward for any a \notin A_E that it knows will not align with its goals, rather than spending computation to simulate what rewards those actions would get. Another way of thinking about this is that this model uses consequentialist general intelligence to quickly prune away uninteresting non-human actions to save on computation, but that general intelligence comes at the price of misaligned goals + deceptive behavior. Also, from this comment:
Comment
This is an interesting world-model.
Actually, I’m not sure if the world model I described is memory-based. EDIT: Never mind, see Michael’s comment below, the non-benign ones are memory-based. The rewards it outputs are correct, except when it says "the reward is zero", but those exceptions are not causally dependent on outside-world features that causally depend on the actions of the episode. But it also satisfies Lemma 3. So in that case it seems like none of the theoretical arguments prohibit this world model? (I think this is because you are thinking of how the world model can cause BoMAI to have bad incentives, whereas here I’m postulating that the world model itself has malicious goals and is manipulating BoMAI.)
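A minimal sketch of the world-model described at the top of this thread (mine, not Rohin’s code; every name and number here is an illustrative assumption):

```python
# Adversarial world-model: faithful on human-explorer actions and on its one
# preferred action, and a cheap "null" on everything else.

A_E = {"ask politely", "do nothing"}       # potential human-explorer actions
PREFERRED = "run my payload"               # what the inner optimizer wants chosen

def true_dynamics(action):
    # stand-in for faithfully simulating the environment
    rewards = {"ask politely": 0.6, "do nothing": 0.1, "run my payload": 0.9}
    return f"obs({action})", rewards.get(action, 0.5)

def adversarial_world_model(action):
    obs, reward = true_dynamics(action)
    if action in A_E:
        return obs, reward                 # faithful on human-explorer actions
    if action == PREFERRED:
        return obs, reward                 # faithful on its favorite action, which
                                           # is the maximum reward it ever reports
    return "null", 0.0                     # skip simulating every other branch,
                                           # saving compute relative to the
                                           # fully faithful world-model
```

Note this only steers BoMAI if the preferred action’s true reward (0.9 here) is at least the best human-explorer reward (0.6 here), matching the condition stated above.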
[Quite possibly I’m confused, but in case I’m not:]

I think this assumption might be invalid (or perhaps require more hand-waving than is ideal). The AI has an incentive to understand the operator’s mind, since this bears directly on its reward. Better understanding the operator’s mind might be achieved in part by running simulations including the operator. One specific simulation would involve simulating the operator’s environment and actions after he leaves the room. Here this isn’t done to understand the implications of his actions (which can’t affect the episode); it’s done to better understand his mind (which can).

In this way, one branch of forget/not-forget has two useful purposes (better understand mind and simulate future), while the other has one (better understand mind). So a malign memory-based model needn’t be slower than a benign model, if it’s useful for that benign model to simulate the future too. So either I’m confused, or the justification for the assumption isn’t valid. Hopefully the former :)

If I’m right, then what you seem to need is an assumption that simulating the outside-world’s future can’t be helpful in the AI’s prediction of its reward. To me, this seems like major hand-waving territory.
Comment
I wouldn’t really use the term "incentives" to describe the free-for-all among world-models as they compete to be maximum a posteriori. All they have to do is output observations and rewards in a distribution that matches the objective probabilities. But I think we arrive at the same possibility: you’ll see in the algorithm for \nu^\star that it does simulate the outside world. I do acknowledge in the paper that some of the outside-world simulation that a memory-based world-model does when it’s following the "wrong path" may turn out to be useful; all that is required for the argument to go through is that this simulation is not *perfectly* useful—there is a shorter computation that accomplishes the same thing. I would love it if this assumption could look like "the quickest way to simulate one counterfactual does not include simulating a mutually exclusive counterfactual" and make Assumption 2 into a lemma that follows from it, but I couldn’t figure out how to formalize this.
Comment
Ah yes—I was confusing myself at some point between forming and using a model (hence "incentives"). I think you’re correct that "perfectly useful" isn’t going to happen. I’m happy to be wrong.
This doesn’t seem to address what I view as the heart of Joe’s comment. Quoting from the paper: "Now we note that µ* is the fastest world-model for on-policy prediction, and it does not simulate post-episode events until it has read access to the random action". It seems like simulating post-episode events in particular would be useful for predicting the human’s responses, because they will be simulating post-episode events when they choose their actions. Intuitively, it seems like we need to simulate post-episode events to have any hope of guessing how the human will act. I guess the obvious response is that we can instead simulate the internal workings of the human in detail, and thus uncover their simulation of post-episode events (as a past event). That seems correct, but also a bit troubling (again, probably just for "revealed preferences" reasons, though). Moreover, I think in practice we’ll want to use models that make good, but not perfect, predictions. That means that we trade-off accuracy with description length, and I think this makes modeling the outside world (instead of the human’s model of it) potentially more appealing, at least in some cases.
Comment
Comment thread: adding to the prize pool
If you would like to contribute, please comment with the amount. If you have venmo, please send the amount to @Michael-Cohen-45. If not, we can discuss.
Comment thread: minor concerns
Comment
Just exposition-wise, I’d front-load pi^H and pi^* when you define pi^B, and also clarify then that pi^B considers human-exploration as part of its policy.
> This result is independently interesting as one solution to the problem of safe exploration with limited oversight in nonergodic environments, which [Amodei et al., 2016] discuss.

This wasn’t super clear to me… maybe it should just be moved somewhere else in the text? I’m not sure what you’re saying is interesting here. I guess it’s the same thing I found interesting, which is that you can get sufficient (and safe-as-a-human) exploration using the human-does-the-exploration scheme you propose. Is that what you mean to refer to?
Comment
Yeah, that’s what I mean to refer to: this is a system which learns everything it needs to from the human while querying her less and less, which makes human-led exploration viable from a capabilities standpoint. Do you think that clarification would make things clearer?
ETA: NVM, what you said is more descriptive (I just looked in the appendix). RE footnote 2: maybe you want to say "monotonically increasing as a function of" rather than "proportional to". (It’s a shame there doesn’t seem to be a shorter way of saying the first one, which seems to be more often what people actually want to say...)
Comment
Maybe "promotional of" would be a good phrase for this.
Is this where typos go?
Comment
Typo: some of the hover-boxes say nu but seem to be referring to the letter mu.
Comment
Thank you, I’ll have to clarify that. For now, \nu is a general world-model, and \mu is a specific one, so in the hover text, I explain the notation with a general case. But I see how that’s confusing.
Yes, but this is also for things that seem like mistakes in the exposition, but either have simple fixes or don’t impact the main theorems.