**Editor’s note:** The following is a lightly edited copy of a document written by Eliezer Yudkowsky in November 2017. Since this is a snapshot of Eliezer’s thinking at a specific time, we’ve sprinkled reminders throughout that this is from 2017.

A background note: It’s often the case that people are slow to abandon obsolete playbooks in response to a novel challenge. And AGI is certainly a very novel challenge. Italian general Luigi Cadorna offers a memorable historical example. In the Isonzo Offensive of World War I, Cadorna lost hundreds of thousands of men in futile frontal assaults against enemy trenches defended by barbed wire and machine guns. As morale plummeted and desertions became epidemic, Cadorna began executing his own soldiers en masse, in an attempt to cure the rest of their "cowardice." The offensive continued for 2.5 years.

Cadorna made many mistakes, but foremost among them was his refusal to recognize that this war was fundamentally unlike those that had come before. Modern weaponry had forced a paradigm shift, and Cadorna’s instincts were not merely miscalibrated—they were systematically broken. No number of small, incremental updates within his obsolete framework would be sufficient to meet the new challenge. Other examples of this type of mistake include the initial response of the record industry to iTunes and streaming; or, more seriously, the response of most Western governments to COVID-19.
As usual, the real challenge of reference class forecasting is figuring out which reference class the thing you’re trying to model belongs to. For most problems, rethinking your approach from the ground up is wasteful and unnecessary, because most problems have a similar causal structure to a large number of past cases. When the problem isn’t commensurate with existing strategies, as in the case of AGI, you need a new playbook.

Contents
- Trustworthy command
- Closure
- Opsec
- Common good commitment
- Alignment mindset
- Resources
- Further Remarks

I’ve sometimes been known to complain, or in a polite way scream in utter terror, that "there is no good guy group in AGI", i.e., if a researcher on this Earth currently wishes to contribute to the common good, there are literally zero projects they can join and no project close to being joinable. In its present version, this document is an informal response to an AI researcher who asked me to list out the qualities of such a "good project". In summary, a "good project" needs:
- **Trustworthy command:** A trustworthy chain of command with respect to both legal and pragmatic control of the intellectual property (IP) of such a project, with a running AGI included as "IP" in this sense.
- **Research closure:** The organizational ability to close and/or silo IP within a trustworthy section and prevent its release by sheer default.
- **Strong opsec:** Operational security adequate to prevent the proliferation of code (or other information sufficient to recreate the code within, e.g., 1 year) due to, e.g., Russian intelligence agencies grabbing the code.
- **Common good commitment:** The project’s command and its people must have a credible commitment to both short-term and long-term goodness. Short-term goodness comprises the immediate welfare of present-day Earth; long-term goodness is the achievement of transhumanist astronomical goods.
- **Alignment mindset:** Somebody on the project needs deep enough security mindset, plus understanding of AI cognition, that they can originate new, deep measures to ensure AGI alignment; and they must be in a position of technical control or otherwise have effectively unlimited political capital. Everybody on the project needs to understand and expect that aligning an AGI will be terrifically difficult and terribly dangerous.
- **Requisite resource levels:** The project must have adequate resources to compete at the frontier of AGI development, including whatever mix of computational resources, intellectual labor, and closed insights is required to produce a 1+ year lead over less cautious competing projects.

I was asked what would constitute "minimal, adequate, and good" performance on each of these dimensions. I tend to divide things sharply into "not adequate" and "adequate" but will try to answer in the spirit of the question nonetheless.
Trustworthy command
**Token:** Not having pragmatic and legal power in the hands of people who are opposed to the very idea of trying to align AGI, or who want an AGI in every household, or who are otherwise allergic to the easy parts of AGI strategy. E.g.: Larry Page begins with the correct view that cosmopolitan values are good, speciesism is bad, it would be wrong to mistreat sentient beings just because they’re implemented in silicon instead of carbon, and so on. But he then proceeds to reject the idea that goals and capabilities are orthogonal, that instrumental strategies are convergent, and that value is complex and fragile. As a consequence, he expects AGI to automatically be friendly, and is liable to object to any effort to align AI as an attempt to keep AI "chained up". Or, e.g.: As of December 2015, Elon Musk not only wasn’t on board with closure, but apparently wanted to *open-source* superhumanly capable AI. Elon Musk is not in his own person a majority of OpenAI’s Board, but if he can pragmatically sway a majority of that Board then this measure is not being fulfilled even to a token degree. (Update: Elon Musk stepped down from the OpenAI Board in February 2018.)

**Improving:** There’s a legal contract which says that the Board doesn’t control the IP and that the alignment-aware research silo does.

**Adequate:** The entire command structure, including all members of the finally governing Board, is fully aware of the difficulty and danger of alignment. The Board will not object if the technical leadership have disk-erasure measures ready in case the Board suddenly decides to try to open-source the AI anyway.

**Excellent:** Somehow no local authority poses a risk of stepping in and undoing any safety measures, etc. I have no idea what incremental steps could be taken in this direction that would not make things worse. If e.g. the government of Iceland suddenly understood how serious things had gotten and granted sanction and security to a project, that would fit this description, but I think that trying to arrange anything like this would probably make things worse globally because of the mindset it promoted.
Closure
**Token:** It’s generally understood organizationally that some people want to keep code, architecture, and some ideas a ‘secret’ from outsiders, and everyone on the project is okay with this even if they disagree. In principle people aren’t being pressed to publish their interesting discoveries if they are obviously capabilities-laden; in practice, somebody always says "but someone else will probably publish a similar idea 6 months later" and acts suspicious of the hubris involved in thinking otherwise, but it remains possible to get away with not publishing at moderate personal cost.

**Improving:** A subset of people on the project understand why some code, architecture, lessons learned, et cetera must be kept from reaching the general ML community if success is to have a probability significantly greater than zero (because tradeoffs between alignment and capabilities make the challenge unwinnable if there isn’t a project with a reasonable-length lead time). These people have formed a closed silo within the project, with the sanction and acceptance of the project leadership. It’s socially okay to be conservative about what counts as potentially capabilities-laden thinking, and it’s understood that worrying about this is not a boastful act of pride or a trick to get out of needing to write papers.

**Adequate:** Everyone on the project understands and agrees with closure. Information is siloed whenever not everyone on the project needs to know it.

Reminder: This is a 2017 document.

Opsec

**Token:** Random people are not allowed to wander through the building.

**Improving:** Your little brother cannot steal the IP. Stuff is encrypted. Siloed project members sign NDAs.

**Adequate:** Major governments cannot silently and unnoticeably steal the IP without a nonroutine effort. All project members undergo government-security-clearance-style screening. AGI code is not running on AWS, but in an airgapped server room. There are cleared security guards in the server room.

**Excellent:** Military-grade or national-security-grade security. (It’s hard to see how attempts to get this could avoid being counterproductive, considering the difficulty of obtaining trustworthy command and common good commitment with respect to any entity that can deploy such force, and the effect that trying would have on general mindsets.)
Common good commitment
**Token:** Project members and the chain of command are not openly talking about how dictatorship is great so long as they get to be the dictator. The project is not directly answerable to Trump or Putin. They say vague handwavy things about how of course one ought to promote democracy and apple pie (applause) and that everyone ought to get some share of the pot o’ gold (applause).

**Improving:** Project members and their chain of command have come out explicitly in favor of being nice to people and eventually building a nice intergalactic civilization. They would release a cancer cure if they had it, their state of deployment permitting, and they don’t seem likely to oppose incremental steps toward a postbiological future and the eventual realization of most of the real value at stake.

**Adequate:** Project members and their chain of command have an explicit commitment to something like coherent extrapolated volition as a long-run goal, AGI tech permitting, and otherwise the careful preservation of values and sentient rights through any pathway of intelligence enhancement. In the short run, they would not do everything that seems to them like a good idea, and would first prioritize not destroying humanity or wounding its spirit with their own hands. (E.g., if Google or Facebook consistently thought like this, they would have become concerned a lot earlier about social media degrading cognition.) Real actual moral humility with policy consequences is a thing.
Alignment mindset
**Token:** At least some people in command sort of vaguely understand that AIs don’t just automatically do whatever the alpha male in charge of the organization wants to have happen. They’ve hired some people who are at least pretending to work on that in a technical way, not just "ethicists" to talk about trolley problems and which monkeys should get the tasty banana.

**Improving:** The technical work output by the "safety" group is neither obvious nor wrong. People in command have ordinary paranoia about AIs. They expect alignment to be somewhat difficult and to take some extra effort. They understand that not everything they might like to do, with the first AGI ever built, is equally safe to attempt.

**Adequate:** The project has realized that building an AGI is mostly about aligning it. Someone with full security mindset and deep understanding of AGI cognition as cognition has proven themselves able to originate new deep alignment measures, and is acting as technical lead with effectively unlimited political capital within the organization to make sure the job actually gets done. Everyone expects alignment to be terrifically hard and terribly dangerous and full of invisible bullets whose shadow you have to see before the bullet comes close enough to hit you. They understand that alignment severely constrains architecture and that capability often trades off against transparency. The organization is targeting the minimal AGI doing the least dangerous cognitive work that is required to prevent the next AGI project from destroying the world. The alignment assumptions have been reduced into non-goal-valent statements, have been clearly written down, and are being monitored for their actual truth.

Alignment mindset is *fundamentally* difficult to obtain for a project because Graham’s Design Paradox applies. People with only ordinary paranoia may not be able to distinguish the next step up in depth of cognition, and happy innocents cannot distinguish useful paranoia from suits making empty statements about risk and safety. They also tend not to realize what they’re missing. This means that there is a horrifically strong default that when you persuade one more research-rich person or organization or government to start a new project, that project *will* have inadequate alignment mindset unless something extra-ordinary happens. I’ll be frank and say that, relative to the present world, I think this essentially has to go through trusting me or Nate Soares to actually work, although see below about Paul Christiano. The lack of clear, person-independent instructions by which somebody low on this dimension could improve along it is why the difficulty of this dimension is the real killer. If you insisted on trying this the impossible way, I’d advise that you start by talking to a brilliant computer security researcher rather than a brilliant machine learning researcher.
Resources
**Token:** The project has a combination of funding, good researchers, and computing power which makes it credible as a beacon, one to which interested philanthropists can add more funding and which other good researchers interested in aligned AGI can join. E.g., OpenAI would qualify as this if it were adequate on the other 5 dimensions.

**Improving:** The project has size and researcher quality on the level of, say, Facebook’s AI lab, and can credibly compete among the almost-but-not-quite biggest players. When they focus their attention on an unusual goal, they can get it done 1+ years ahead of the general field so long as Demis doesn’t decide to do it first. I expect e.g. the NSA would have this level of "resources" if they started playing now but didn’t grow any further.

**Adequate:** The project can get things done with a 2-year lead time on anyone else, and it’s not obvious that competitors could catch up even if they focused attention there. DeepMind has a great mass of superior people and unshared tools, and is the obvious candidate for achieving adequacy on this dimension; though they would still need adequacy on other dimensions, and more closure in order to conserve and build up advantages. As I understand it, an adequate resource advantage is explicitly what Demis was trying to achieve, before Elon blew it up, started an openness fad and an arms race, and probably got us all killed. Anyone else trying to be adequate on this dimension would need to pull ahead of DeepMind, merge with DeepMind, or talk Demis into closing more research and putting less effort into unalignable AGI paths.

**Excellent:** There’s a single major project which a substantial section of the research community understands to be The Good Project that good people join, with competition to it deemed unwise and unbeneficial to the public good. This Good Project is at least adequate along all the other dimensions. Its major competitors lack either equivalent funding or equivalent talent and insight. Relative to the present world it would be extremely difficult to make any project like this exist with adequately trustworthy command and alignment mindset, and failed attempts to make it exist run the risk of creating still worse competitors developing unaligned AGI.

**Unrealistic:** There is a single global Manhattan Project which is somehow not answerable to non-common-good command such as Trump or Putin or the United Nations Security Council. It has orders of magnitude more computing power and smart-researcher-labor than anyone else. Something keeps other AGI projects from arising and trying to race with the giant project. The project can freely choose transparency in all transparency-capability tradeoffs and take an extra 10+ years to ensure alignment. The project is at least adequate along all other dimensions. This is how our distant, surviving cousins are doing it in their Everett branches that diverged centuries earlier towards more competent civilizational equilibria. You cannot possibly cause such a project to exist with adequately trustworthy command, alignment mindset, and common-good commitment, and you should therefore not try to make it exist, first because you will simply create a still more dire competitor developing unaligned AGI, and second because if such an AGI could be aligned it would be a hell of an s-risk given the probable command structure. People who are slipping sideways in reality fantasize about being able to do this.
Reminder: This is a 2017 document.

Further Remarks

A project with "adequate" closure and a project with "improving" closure will, if joined, aggregate into a project with "improving" (aka: inadequate) closure where the closed section is a silo within an open organization. Similar remarks apply along other dimensions. The aggregate of a project with NDAs, and a project with deeper employee screening, is a combined project with some unscreened people in the building and hence "improving" opsec.

"Adequacy" on the dimensions of closure and opsec is based around my mainline-probability scenario where you unavoidably need to spend at least 1 year in a regime where the AGI is not yet alignable on a minimal act that ensures nobody else will destroy the world shortly thereafter, but during that year it’s possible to remove a bunch of safeties from the code, shift transparency-capability tradeoffs to favor capability instead, ramp up to full throttle, and immediately destroy the world. During this time period, leakage of the code to the wider world automatically results in the world being turned into paperclips. Leakage of the code to multiple major actors such as commercial espionage groups or state intelligence agencies seems to me to stand an extremely good chance of destroying the world, because at least one such state actor’s command will not reprise the alignment debate correctly and each of them will fear the others. I would also expect that, if key ideas and architectural lessons-learned were to leak from an insufficiently closed project that would otherwise have actually developed alignable AGI, it would be possible to use 10% as much labor to implement a non-alignable world-destroying AGI in a shorter timeframe. The project must be closed tightly or everything ends up as paperclips.

"Adequacy" on common good commitment is based on my model wherein the first task-directed AGI continues to operate in a regime far below that of a real superintelligence, where many tradeoffs have been made for transparency over capability and this greatly constrains self-modification. This task-directed AGI is not able to defend against true superintelligent attack. It cannot monitor other AGI projects in an unobtrusive way that grants those other AGI projects a lot of independent freedom to do task-AGI-ish things so long as they don’t create an unrestricted superintelligence. The designers of the first task-directed AGI are barely able to operate it in a regime where the AGI doesn’t create an unaligned superintelligence inside itself or its environment. Safe operation of the original AGI requires a continuing major effort at supervision. The level of safety monitoring of other AGI projects required would be so great that, if the original operators deemed it good that more things be done with AGI powers, it would be far simpler and safer to do them as additional tasks running on the original task-directed AGI.

*Therefore:* Everything to do with invocation of superhuman specialized general intelligence, like superhuman science and engineering, continues to have a single effective veto point. This is also true in less extreme scenarios where AGI powers can proliferate, but must be very tightly monitored, because no aligned AGI can defend against an unconstrained superintelligence if one is deliberately or accidentally created by taking off too many safeties.
Either way, there is a central veto authority that continues to actively monitor and has the power to prevent anyone else from doing anything potentially world-destroying with AGI. This in turn means that any use of AGI powers along the lines of uploading humans, trying to do human intelligence enhancement, or building a cleaner and more stable AGI to run a CEV, would be subject to the explicit veto of the command structure operating the first task-directed AGI. If this command structure does not favor something like CEV, or vetoes transhumanist outcomes from a transparent CEV, or doesn’t allow intelligence enhancement, et cetera, then all future astronomical value can be permanently lost and even s-risks may apply. A universe in which 99.9% of the sapient beings have no civil rights because way back on Earth somebody decided or voted that emulations weren’t real people, is a universe plausibly much worse than paperclips. (I would see as self-defeating any argument from democratic legitimacy that ends with almost all sapient beings not being able to vote.)

If DeepMind closed to the silo level, put on adequate opsec, somehow gained alignment mindset within the silo, and allowed trustworthy command of that silo, then in my guesstimation it *might* be possible to save the Earth (we would start to leave the floor of the logistic success curve). OpenAI seems to me to be further behind than DeepMind along multiple dimensions. OAI is doing significantly better "safety" research, but it is all still inapplicable to serious AGI, AFAIK, even if it’s not fake / obvious. I do not think that either OpenAI or DeepMind are out of the basement on the logistic success curve for the alignment-mindset dimension. It’s not clear to me from where I sit that the miracle required to grant OpenAI a chance at alignment success is easier than the miracle required to grant DeepMind a chance at alignment success. If Greg Brockman or other decisionmakers at OpenAI are not totally insensible, neither is Demis Hassabis. Both OAI and DeepMind have significant metric distance to cross on Common Good Commitment; this dimension is relatively easier to max out, but it’s not maxed out just by having commanders vaguely nodding along or publishing a mission statement about moral humility, nor by a fragile political balance with some morally humble commanders and some morally nonhumble ones.

If I had a ton of money and I wanted to get a serious contender for saving the Earth out of OpenAI, I’d probably start by taking however many OpenAI researchers could pass screening and refounding a separate organization out of them, then using that as the foundation for further recruiting.

I have never seen anyone except Paul Christiano try what I would consider to be deep macro alignment work. E.g., if you look at Paul’s AGI scheme there is a global alignment story with assumptions that can be broken down, and the idea of exact human imitation is a deep one rather than a shallow defense—although I don’t think the assumptions have been broken down far enough; but nobody else knows they even ought to be trying to do anything like that. I also think Paul’s AGI scheme is orders-of-magnitude too costly and has chicken-and-egg alignment problems.
But I wouldn’t totally rule out a project with Paul in technical command, because I would hold out hope that Paul could follow along with someone else’s deep security analysis and understand it in-paradigm even if it wasn’t his own paradigm; that Paul would suggest useful improvements and hold the global macro picture to a standard of completeness; and that Paul would take seriously how bad it would be to violate an alignment assumption even if it wasn’t an assumption within his native paradigm.

Nobody else except myself and Paul is currently in the arena of comparison. If we were both working on the same project it would still have unnervingly few people like that. I think we should try to get more people like this from the pool of brilliant young computer security researchers, not just the pool of machine learning researchers. Maybe that’ll fail just as badly, but I want to see it tried.

I doubt that it is possible to produce a written scheme for alignment, or any other kind of fixed advice, that can be handed off to a brilliant programmer with ordinary paranoia and allow them to actually succeed. Some of the deep ideas are going to turn out to be wrong, inapplicable, or just plain missing. Somebody is going to have to notice the unfixable deep problems in advance of an actual blowup, and come up with new deep ideas and not just patches, as the project goes on.

Reminder: This is a 2017 document.
Thanks for sharing this publicly! Candid transparency is very difficult.
It’s also nice to see an organization making public something that it originally felt it couldn’t, now that some time has passed!
With Eliezer’s assent, I hit the Publish button for this post, and included an editor’s note that I co-wrote with Duncan Sabien.
Comment
May I ask why you guys decided to publish this now in particular? Totally fine if you can’t answer that question, of course.
Comment
It’s been high on some MIRI staff’s "list of things we want to release" over the years, but we repeatedly failed to make a revised/rewritten version of the draft we were happy with. So I proposed that we release a relatively unedited version of Eliezer’s original draft, and Eliezer said he was okay with that (provided we sprinkle the "Reminder: This is a 2017 document" notes throughout). We’re generally making a push to share a lot of our models (expect more posts soon-ish), because we’re less confident about what the best object-level path is to ensuring the long-term future is awesome, so (as I described in April) we’ve "updated a lot toward existential wins being likelier if the larger community moves toward having much more candid and honest conversations, and generally produces more people who are thinking exceptionally clearly about the problem". I think this was always plausible to some degree, but it’s grown in probability; and model-sharing is competing against fewer high-value uses of Eliezer and Nate’s time now that they aren’t focusing their own current efforts on alignment research.
Comment
Should I update and be encouraged or discouraged by this?
Comment
Discouraged. Eliezer and Nate feel that their past alignment research efforts failed, and they don’t currently know of a new research direction that feels promising enough that they want to focus their own time on advancing it, or make it MIRI’s organizational focus. I do think ‘trying to directly solve the alignment problem’ is the most useful thing the world can be doing right now, even if it’s not currently Eliezer or Nate’s comparative advantage. A good way to end up with a research direction EY or Nate are excited by, IMO, is for hundreds of people to try hundreds of different angles of attack and see if any bear fruit. Then a big chunk of the field can pivot to whichever niche approach bore the most fruit. From MIRI’s perspective, the hard part is that:
(a) we don’t know in advance which directions will bear fruit, so we need a bunch of people to go make unlikely bets so we can find out;
(b) there currently aren’t that many people trying to solve the alignment problem at all; and
(c) nearly all of the people trying to solve the problem full-time are adopting unrealistic optimistic assumptions about things like ‘will alignment generalize as well as capabilities?’ and ‘will the first pivotal AI systems be safe-by-default?’, in such a way that their research can’t be useful if we’re in the mainline-probability world. What I’d like to see instead is more alignment research, and especially research of the form "this particular direction seems unlikely to succeed, but if it succeeds then it will in fact help a lot in mainline reality", as opposed to directions that (say) seem a bit likelier to succeed but won’t actually help in the mainline world. (In principle you want nonzero effort going into both approaches, but right now the field is almost entirely in the second camp, from MIRI’s perspective. And making a habit of assuming your way out of mainline reality is risky business, and outright dooms your research once you start freely making multiple such assumptions.)
Comment
Fiscal limits notwithstanding, doesn’t this suggest MIRI should try hiring a lot more (maybe B-tier) researchers?
Comment
Quoting a thing I said in March:
(1) people who can generate promising new alignment ideas. (By far the top priority, but seems empirically rare.)
(2) competent executives who are unusually good at understanding the kinds of things MIRI is trying to do, and who can run their own large alignment projects mostly-independently. For 2, I think the best way to get hired by MIRI is to prove your abilities via the Visible Thoughts Project. The post there says a bit more about the kind of skills we’re looking for:
With a few examples, this comment would make a useful post in its own right.
Comment
Nate is writing that post. :)
Comment
Quoting from the "strategic background" summary we shared in 2017:
Comment
Replying to your points with that in mind, Mitchell:
Comment
I want to register my skepticism about this claim. Whereas it might naively seem that "put a strawberry on a plate" is easier to align than "extrapolated volition", on a closer look there are reasons why it might be the other way around. Specifically, the notion of "utility function of a given agent" is a natural concept that we should expect to have a relatively succinct mathematical description. This intuition is supported by the AIT definition of intelligence. On the other hand, "put a strawberry on a plate without undesirable side effects" is not a natural concept, since a lot of complexity is packed into "undesirable side effects". Therefore, while I see some lines of attack on both task AGI and extrapolated volition, the latter might well turn out to be easier.
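(A minimal formal sketch, assuming the AIT definition referred to here is Legg and Hutter’s universal intelligence measure; the point being that an agent’s goals enter the definition as a single succinctly specified mathematical object:)

```latex
% Legg–Hutter universal intelligence (assumed referent, not named in
% the comment above): a policy \pi is scored by its expected total
% reward V_\mu^\pi in each computable environment \mu \in E, weighted
% by simplicity via the Kolmogorov complexity K(\mu).
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}
```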
Comment
And if humans had a utility function and we knew what that utility function was, we would not need CEV. Unfortunately extracting human preferences over out-of-distribution options and outcomes at dangerously high intelligence, using data gathered at safe levels of intelligence and a correspondingly narrower range of outcomes and options, when there exists no sensory ground truth about what humans want because human raters can be fooled or disassembled, seems pretty complicated. There is ultimately a rescuable truth about what we want, and CEV is my lengthy informal attempt at stating what that even is; but I would assess it as much, much, much more difficult than ‘corrigibility’ to train into a dangerously intelligent system using only training and data from safe levels of intelligence. (As is the central lethally difficult challenge of AGI alignment.) If we were paperclip maximizers and knew what paperclips were, then yes, it would be easier to just build an offshoot paperclip maximizer.
Comment
I agree that it’s a tricky problem, but I think it’s probably tractable. The way PreDCA tries to deal with these difficulties is:
- The AI can tell that, even before the AI was turned on, the physical universe was running certain programs.
- Some of those programs are "agentic" programs.
- Agentic programs have approximately well-defined utility functions.
- Disassembling the humans doesn’t change anything, since it doesn’t affect the programs that were already running[1] before the AI was turned on.
- Since we’re looking at agent-programs rather than specific agent-actions, there is much more ground for inference about novel situations.

Obviously, the concepts I’m using here (e.g. which programs are "running" or which programs are "agentic") are non-trivial to define, but infra-Bayesian physicalism does allow us to define them (not without some caveats, but hopefully at least to a 1st approximation).
Yeah, I’m very interested in hearing counter-arguments to claims like this. I’ll say that although I think task AGI is easier, it’s not necessarily strictly easier, for the reason you mentioned. Maybe a cruxier way of putting my claim is: Maybe corrigibility / task AGI / etc. is harder than CEV, but it just doesn’t seem realistic to me to try to achieve full, up-and-running CEV with the very first AGI systems you build, within a few months or a few years of humanity figuring out how to build AGI at all. And I do think you need to get CEV up and running within a few months or a few years, if you want to both (1) avoid someone else destroying the world first, and (2) not use a "strawberry-aligned" AGI to prevent 1 from happening. All of the options are to some extent a gamble, but corrigibility, task AGI, limited impact, etc. strike me as gambles that could actually realistically work out well for humanity even under extreme time pressure to deploy a system within a year or two of ‘we figure out how to build AGI’. I don’t think CEV is possible under that constraint. (And rushing CEV and getting it only 95% correct poses far larger s-risks than rushing low-impact non-operator-modeling strawberry AGI and getting it only 95% correct.)
Comment
Insofar as humans care about their AI being corrigible, we should expect some degree of corrigibility even from a CEV-maximizer. That, in turn, suggests at least some basin-of-attraction for values (at least along some dimensions), in the same way that corrigibility yields a basin-of-attraction. (Though obviously that’s not an argument we’d want to make load-bearing without both theoretical and empirical evidence about how big the basin-of-attraction is along which dimensions.) Conversely, it doesn’t seem realistic to define limited impact or corrigibility or whatever without relying on an awful lot of values information—like e.g. what sort of changes-to-the-world we do/don’t care about, what thing-in-the-environment the system is supposed to be corrigible with, etc. Values seem like a necessary-and-sufficient component. Corrigibility/task architecture/etc doesn’t.
Comment
I agree that this would be scary if the system is, for example, as smart as physically possible. What I’m imagining is:
(1) if you find a way to ensure that the system is only weakly superhuman (e.g., it performs vast amounts of low-level-Google-engineer-quality reasoning, only rare short controlled bursts of von-Neumann-quality reasoning, and nothing dramatically above the von-Neumann level), and
(2) if you get the system to only care about thinking about this cube of space, and
(3) if you also somehow get the system to want to build the particular machine you care about,

then you can plausibly save the world, and (importantly) you’re not likely to destroy the world if you fail, assuming you really are correctly confident in 1, 2, and 3. I think you can also get more safety margin if the cube is in Antarctica (or on the Moon?), if you’ve tried to seal it off from the environment to some degree, and if you actively monitor for things like toxic waste products, etc.

Notably, the "only care about thinking about this cube of space" part is important for a lot of the other safety features to work, like:

- It’s a lot harder to get guarantees about the system’s intelligence if it’s optimizing the larger world (since it might then improve itself, or build a smart successor in its environment—good luck closing off all possible loopholes for what kinds of physical systems an AGI might build that count as "smart successors", while still leaving it able to build nanotech!).
- Likewise, it’s a lot harder to get guarantees that the system stably is optimizing what you want it to optimize, or stably has any specific internal property, if it’s willing and able to modify itself.
- Part of why we can hope to notice, anticipate, and guard against bad side-effects like "waste products" is that the waste products aren’t optimized to have any particular effect on the external environment, and aren’t optimized to evade our efforts to notice, anticipate, or respond to the danger. For that reason, "An AGI that only terminally cares about the state of a certain cube of space, but does spend time thinking about the larger world" is vastly scarier than an AGI that just-doesn’t-think in those directions.
- If the system does start going off the rails, we’re a lot more likely to be able to shut it down if it isn’t thinking about us or about itself.

This makes me think that the "only care about thinking about certain things" part may be relatively important in order for a lot of other safety requirements to be tractable. It feels more "(realistically) necessary" than "sufficient" to me; but I do personally have a hunch (which hopefully we wouldn’t have to actually rely on as a safety assumption!) that the ability to do things in this reference class would get us, like, 80+% of the way to saving the world? (Dunno whether Eliezer or anyone else at MIRI would agree.)
The way I imagine the win scenario is, we’re going to make a lot of progress in understanding alignment before we know how to build AGI. And, we’re going to do it by prioritizing understanding alignment modulo capability (the two are not really possible to cleanly separate, but it might be possible to separate them enough for this purpose). For example, we can assume the existence of algorithms with certain properties, s.t. these properties arguably imply the algorithms can be used as building-blocks for AGI, and then ask: given such algorithms, how would we build aligned AGI? Or, we can come up with some toy setting where we already know how to build "AGI" in some sense, and ask, how to make it aligned in that setting? And then, once we know how to build AGI in the real world, it would hopefully not be too difficult to translate the alignment method.
One caveat in all this is, if AGI is going to use deep learning, we might not know how to apply the lesson from the "oracle"/toy setting, because we don’t understand what deep learning is actually doing, and because of that, we wouldn’t be sure where to "slot" it in the correspondence/analogy s.t. the alignment method remains sound. But, mainstream researchers have been making progress on understanding what deep learning is actually doing, and IMO it’s plausible we will have a good mathematical handle on it before AGI.
I’m not sure whether you mean "95% correct CEV has a lot of S-risk" or "95% correct CEV has a little S-risk, and even a tiny amount of S-risk is terrifying"? I think I agree with the latter but not with the former. (How specifically does 95% CEV produce S-risk? I can imagine something like "AI realizes we want non-zero amount of pain/suffering to exist, somehow miscalibrates the amount and creates a lot of pain/suffering" or "AI realizes we don’t want to die, and focuses on this goal on the expense of everything else, preserving us forever in a state of complete sensory deprivation". But these scenarios don’t seem very likely?)
Comment
Thank you for the long reply. The 2017 document postulates an "acute risk period" in which people don’t know how to align, and then a "stable period" once alignment theory is mature. So if I’m getting the gist of things, rather than focus outright on the creation of a human-friendly superhuman AI, MIRI decided to focus on developing a more general theory and practice of alignment; and then once alignment theory is sufficiently mature and correct, one can focus on applying that theory to the specific crucial case, of aligning superhuman AI with extrapolated human volition. But what’s happened is that we’re racing towards superhuman AI while the general theory of alignment is still crude, and this is a failure for the strategy of prioritizing general theory of alignment over the specific task of CEV. Is that vaguely what happened?
Comment
The "stable period" is supposed to be a period in which AGI already exists, but nothing like CEV has yet been implemented, and yet "no one can destroy the world with AGI". How would that work? How do you prevent everyone in the whole wide world from developing unsafe AGI during the stable period?
Comment
Use strawberry alignment to melt all the computing clusters containing more than 4 GPUs. (Not actually the best thing to do with strawberry alignment, IMO, but anything you can do here is outside the Overton Window, so I picked something of which I could say "Oh but I wouldn’t actually do that" if pressed.)
I think there are multiple viable options, like the toy example EY uses:
Comment
- if no one does A, then all humans die and the future’s entire value is lost.
- by comparison, it doesn’t matter much to anyone who does A; everyone stands to personally gain or lose a lot based on whether A is done, but they accrue similar value regardless of which actor does A. (Because there are vastly more than enough resources in the universe for everyone. The notion that this is a zero-sum conflict to grab a scarce pot of gold is calibrated to a very different world than the "ASI exists" world.)
- doing A doesn’t necessarily mean that your idiosyncratic values will play a larger role in shaping the long-term future than anyone else’s, and in fact you’re bought into a specific plan aimed at preventing this outcome. (Because CEV, no-pot-of-gold, etc.)

I do think there are serious risks and moral hazards associated with a transition to that state of affairs. (I think this regardless of whether it’s a government or a private actor or an intergovernmental collaboration or whatever that’s running the task AGI.) But I think it’s better for humanity to try to tackle those risks and moral hazards, than for humanity to just give up and die? And I haven’t heard a plausible-sounding plan for what humanity ought to do instead of addressing AGI proliferation somehow.
Realistically, there’s a strong chance (I would say: overwhelmingly strong) that we won’t be able to fully solve CEV before AGI arrives. So since our options in that case will be "strawberry-grade alignment, or just roll over and die", let’s start by working on strawberry-grade alignment. Once we solve that problem, sure, we can shift resources into CEV. If you’re optimistic about ‘rush to CEV’, then IMO you should be even more optimistic that we can nail down strawberry alignment fast, at which point we should have made a lot of headway toward CEV alignment without gambling the whole future on our getting alignment perfect immediately and on the first try.
Likewise, realistically, there’s a strong chance (I would say overwhelming) that there will be some multi-year period where humanity can build AGI, but isn’t yet able to maximally align it. It would be good if we don’t just roll over and die in those worlds; so while we might hope for there to be no such period, we should make plans that are robust to such a period occurring. There’s nothing about the strawberry plan that requires waiting, if it’s not net-beneficial to do so. You can in fact execute a ‘no one else can destroy the world with AGI’ pivotal act, start working on CEV, and then surprise yourself with how fast CEV falls into place and just go implement that in relatively short order. What strawberry-ish actions do is give humanity the option of waiting. *I* think we’ll desperately need this option, but even if you disagree, I don’t think you should consider it net-negative to have the option available in the first place.
Comment
CEV seems much much more difficult than strawberry alignment and I have written it off as a potential option for a baby’s first try at constructing superintelligence. To be clear, I also expect that strawberry alignment is too hard for these babies and they’ll just die. But things can always be even more difficult, and with targeting CEV on a first try, it sure would be. There’s zero room, there is negative room, to give away to luxury targets like CEV. They’re not even going to be able to do strawberry alignment, and if by some miracle we were able to do strawberry alignment and so humanity survived, that miracle would not suffice to get CEV right on the first try.
I highly doubt anything in this universe could be ‘intrinsically’ safe and humane. I’m not sure what such an object would even look like or how we could verify that. Even purposefully designed safety systems built for deep-pocketed customers for very simple operations in a completely controlled environment, such as a car manufacturer’s assembly line with a robot arm assigned solely to tightening nuts, are not presumed to be 100% safe in any case. That’s why they have multiple layers of interlocks, panic buttons, etc. And that’s for something millions of times simpler than a superhuman AGI in ideal conditions. Perhaps before even the strawberry, it would be interesting to consider the difficulty of the movable arm itself.
A useful distinction. Yet of the rare outcomes that follow current timelines without ending in ruin, I expect the most likely one falls into neither category. Instead it’s an AGI that behaves like a weird supersmart human that bootstrapped its humanity from language models with relatively little architecture support (for alignment), as a result of a theoretical miracle where things like that are the default outcome. Possibly from giving a language model autonomy/agency to debug its thinking while having notebooks and a working memory, tuning the model in the process. It’s not going to reliably do as it’s told, could be deceptive, yet possibly doesn’t turn everything into paperclips. Arguably it’s aligned, but only the way weird individual humans are aligned, which is noncentrally strawberry-aligned, and too-indirectly-to-use-the-term CEV-aligned.
I mean, you read the whole Death With Dignity thing, right?
Awesome! Looking forward to seeing what y’all come out with :)
I’m fairly allergic to secrecy. Having to keep things secret is a good way to end up with groupthink and echo chamber effects, and every additional person who knows and understands what you’re doing is another person that can potentially tell you if you’re about to do something stupid. I understand that sometimes it’s necessary but it generally makes things harder.
What are they doing now?
Comment
It would be a lot harder to bullshit NSA leadership all day if, say, they were to oversee the invention of a really good lie detector, and then use their broad authority to hoard technology like that. Which, in addition to everything previously mentioned, is exactly the kind of thing that the NSA is built to do every day. In fact, the lie detector example would actually be even easier than putting uncompromised zero days in a particular brand of router, since routers are a very large, diverse, thinly spread system that probably has a ton of Chinese zero days in the microchips, whereas a lie detector technology can easily be vetted (in realistic high-stakes environments that are actually real) as part of the everyday internal operations of the NSA leadership. These sorts of people were probably bugging their employees’ offices shortly after the invention of the bug.
Comment
The NSA has never done anything remotely close to engineering a truly working lie detector, ever. Also:
Comment
I said oversee, not engineer it themselves. If they have in-house programs developing lie detection technology, I know nothing about that; but assuming that lie detection technology is being improved upon anywhere, then the NSA is probably aware of it and is the first government agency in line to take advantage of the latest generation of that kind of tech (maybe second or third, etc.).
Basically everything I know about zero days comes from Chinese zero days. And it seems to me that if a zero day were developed in the US, it would be developed with planting in mind, because routers are fabricated and assembled in Asian countries with their own intelligence agencies and their own guiding principles of planting zero days on hardware.
My understanding about what the NSA is and isn’t enthusiastic about might be nearly a decade out of date, if it was ever accurate at all.
Comment
So, zero day exploits are generally nonrivalrous; one agency can find one way to run code remotely on a device, and another agency in a different country can find a different flaw or even the same flaw, and they’re not conflicting scenarios. "Planting" bugs as the software or hardware product is being developed is one avenue to accomplish that, but it’s unnecessary, especially for items like routers. I have several friends who do this exact type of work developing zero days for a private military contractor instead of the government. They’re handed a piece of equipment, they find bugs, they develop a plug-and-play weaponization for someone else in the military-industrial complex, rinse, repeat; there’s no need to bribe anyone or do some sort of infiltration of the vendor themselves. When the NSA does the same thing in-house it’s very similar. Injecting bugs during software development of a product is risky for multiple reasons, and if the vendor is American it could be illegal, even if the NSA is doing it. Otherwise I’m kind of confused as to what we’re talking about anymore.
Even with adequate closure and excellent opsec, there can still be risks related to researchers on the team quitting and then joining a competing effort or starting their own AGI company (and leveraging what they’ve learned).
We need to talk about fifty-five.
Comment
Spoiler Warning. Tried hiding it with >! and :::spoiler but neither seems to work. For those unaware, this is a story (worth reading) about anti-memes, ideas that cannot be spread, so researchers have a hard time working with them, not knowing they exist. So the point of the parent comment probably is that even if an adequate AGI project existed we wouldn’t know about it.
Comment
I don’t think any AGI projects that exist today are anywhere near adequacy on all of these dimensions. (And I don’t think we live in the sort of world where there could plausibly be a secret-to-me adequate project, right now in 2022.) I could imagine optimistic worlds where an adequate project exists in the future but I or others publicly glomarize about whether it’s adequate (e.g., because we don’t want to publicly talk about the state of the project’s opsec or something)? Mostly, though, I think we’d want to loudly and clearly broadcast ‘this is the Good Project, the Schelling AGI project to put your resources into if you put your resources into any AGI project’, because part of the point is to facilitate coordination. And my current personal guess is that ‘is this adequate?’ would be useful public information in most worlds where an adequate project like that existed. In a sense this is really basic stuff, and should be more like ‘first-order-obvious boxes to check’.
Comment
I don’t think the SCP Foundation actually exists, and I would be almost HPJEV-level surprised to learn that an analogous secret adequate AGI project existed. "We need to talk about fifty-five" is just a very well-written fictional story about a team of scientists trying to pacify "doomsday technology" within that universe, one that I think people here would enjoy reading. I also agree with you that everything you said is obviously true, and I probably mistakenly implied something really dumb by commenting that here without context.
Comment
FWIW I enjoyed the story when you posted it, and didn’t assume you were trying to make a specific new point with it. :)
I don’t actually necessarily think that, it’s just an interesting story. I feel like there are parallels somewhere, even if I don’t know where they are.
Comment
If it’s anything like real government clearances, lots of paperwork, interviews of friends/family, polygraphs (which contrary to popular belief do get some people), simple analysis of travel records, etc. etc. etc. They tend to be less about detecting past "bad behavior" than detecting lies about bad behavior, which imply you might be lying about the other things. Honestly we could do better.
Comment
I ask because neither the Closure nor the Opsec section talks much about how to actually teach and/or vet the skill of confidentiality, and I wasn’t sure if government-security-clearance-style screening actually included that.
Comment
I’m afraid the clearances themselves won’t be much help for vetting something like that. Their biggest job is to filter against people likeliest to become deliberate spies. Mostly they do that by performing the much easier job of making sure someone isn’t thrill-seeking, is risk-intolerant, and is unlikely to break rules in general. But teaching the skill of confidentiality can be done; governments have been doing that job passably for decades. You can even test it effectively every once in a while by red-teaming your own guys. Tell Paul from section A he’ll get a bonus if he can get someone from section B to give him information that should be siloed there. Then see if section B manages to report Paul’s suspicious questions (yay!), if Paul fails but isn’t detected (meh), or if Paul actually succeeds at getting someone to reveal something confidential (oh no).
Comment
Yeah. I don’t have a strong sense that this is that hard, but I do think you need to be actually trying to succeed at it.