I’m looking to get oriented in the space of "AI policy": interventions that involve world governments (particularly the US government) and existential risk from strong AI. When I hear people talk about "AI policy", my initial reaction is skepticism, because (so far) I can think of very few actions that governments could take that seem to help with the core problems of AI x-risk. However, I haven’t read much about this area, and I don’t know what actual policy recommendations people have in mind. So what should I read to start? Can people link to plans and proposals in the AI policy space? Research papers, general-interest web pages, and one’s own models are all admissible. Thanks.
I’ll post the obvious resources:

- 80k’s US AI Policy article
- Future of Life Institute’s summaries of AI policy resources
- AI Governance: A Research Agenda (Allan Dafoe, FHI)
- Allan Dafoe’s research compilation: probably just the AI section is relevant; some overlap with FLI’s list.
- The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018), Brundage, Avin, et al.: one of the earlier "large collaboration" papers I can recall; probably only the AI Politics and AI Ideal Governance sections are relevant for you.
- Policy Desiderata for Superintelligent AI: A Vector Field Approach: far from object-level, in Bostrom’s style, but tries to be thorough about what AI policy should try to accomplish at a high level.
- CSET’s reports: a very new AI policy org, but pretty exciting, as it’s led by the former head of IARPA, so their recommendations probably have a higher chance of being implemented than the academic-think-tank reference class. Their work so far focuses on documenting China’s developments and on US policy recommendations, e.g. making US immigration more favorable for AI talent.

Published documents can trail the thinking of leaders at orgs by quite a lot. You might be better off emailing someone at the relevant orgs (CSET, GovAI, etc.) with your goals and what you plan to read, and seeing what they would recommend, so you can catch up more quickly.
Comment
The "obvious resources" are just what I want. Thanks.
Also, this 80,000 Hours podcast episode with Allan Dafoe.
What? I feel like I must be misunderstanding, because it seems like there are broad categories of things that governments can do that are helpful, even if you’re only worried about the risk of an AI optimizing against you. I guess I’ll just list some, and you can tell me why none of these work:
Funding safety research
Building aligned AIs themselves
Creating laws that prevent races to the bottom between companies (e.g. "no AI with >X compute may be deployed without first conducting a comprehensive review of the chance of the AI adversarially optimizing against humanity")
Monitoring AI systems (e.g. "we will create a board of AI investigators; everyone making powerful AI systems must be evaluated once a year")

I don’t think there’s a concrete plan that I would want a government to start on today, but I’d be surprised if there weren’t such plans in the future, when we know more (both from further research and from the AI risk problem becoming clearer). You can also look at the papers under the category "AI strategy and policy" in the Alignment Newsletter database.
Comment
As I said, I haven’t oriented on this subject yet; I’m talking from my intuition, and I might be about to say stupid things. (And I might think different things on further thought. I "buy" the arguments I make here at something like 60% to 75%.) I expect we have very different worldviews about this area, so I’m first going to lay out a general argument, which is intended to give context, and then respond to your specific points. Please let me know if anything I say seems crazy or obviously wrong.

**General Argument**

My intuition says that, in general, governments can only be helpful after the core, hard problems of alignment have been solved. After that point there isn’t much for them to do, and before that point I think they’re much more likely to cause harm, for the sorts of reasons I outline in this comment. (There is an argument that EAs should go into policy because the default trajectory involves governments interfering in the development of powerful AI, and having EAs in the mix is apt to make that interference smaller and saner. I’m sympathetic to that, if that’s the plan.)

To say it more specifically: governments are much stupider than people, and can only do sane, useful things if there is a very clear, legible, common-knowledge standard for which things are good and which things are bad.
Governments are not competent to do things like assess which technical research is promising, especially not in fields as new and confusing as AI safety, where the experts themselves disagree about which approaches are promising. But my impression is that governments are mostly not even competent at much more basic assessments, like "which kinds of batteries for electric cars seem promising to invest in (or are even physically plausible)?"
There do appear to be some exceptions to this. DARPA and IARPA seem to be well designed for solving some kinds of important engineering problems, via a mechanism that spawns many projects and culls most of them. I bet DARPA could make progress on AI alignment if there were clear, legible targets to try to hit.
Similarly, governments can constrain the behavior of other actors via law, but this only seems useful if it is very clear which standards they should be enforcing. If legislatures freak out about the danger of AI and then come up with the best compromise solution they can for making sure "no one does anything dangerous" (from an at-best partial understanding of the technical details), I expect this to be harmful on net, because it inserts semi-random obstacles in the way of the technical experts on the ground trying to solve the problem.

. . .

There are only two situations in which I can foresee policy having a major impact: a non-extreme story and an extreme story. The first, non-extreme story is when all of the following conditions hold...
> Funding safety research

This is only any use at all if governments can easily identify tractable research programs that actually contribute to AI safety, rather than just having "AI safety" as a cool tagline. I guess you imagine that that will be the case in the future? Or maybe you think it doesn’t matter if they fund a bunch of terrible, pointless research, so long as some "real" research also gets funded?
> Building aligned AIs themselves

What? It seems like this is only possible if the technical problem is solved and known to be solved. At that point, the problem is solved.
> Creating laws that prevent races to the bottom between companies (e.g. "no AI with >X compute may be deployed without first conducting a comprehensive review of the chance of the AI adversarially optimizing against humanity")

Again, if there are existing, legible standards of what’s safe and what isn’t, this seems good. But without such standards, I don’t see how this helps. It seems like most of what makes this work is inside the "comprehensive review": if our civilization knows how to do that well, then having the government insist on it seems good, but if we don’t know how to do that well, then this looks like security theater.
> Monitoring AI systems (e.g. "we will create a board of AI investigators; everyone making powerful AI systems must be evaluated once a year")

This has the same issue as above.

[Overall, I believe the arguments that I outline in this comment at something like 60% to 75%.]

(Some) cruxes:
[Partial] We are going to have clear, legible standards for aligning AI systems.
We’re going to be in scenario 1 or scenario 2 that I outlined above.
For some other reason, we will have some verified pieces of alignment technology, but AI developers won’t use that technology by default.
Maybe because tech companies are much more reckless or near-sighted than I’m imagining?
Governments are much more competent than I currently believe, or will become much more competent before the endgame.
EAs are planning to go into policy to try to make the governmental reaction smaller and saner, rather than try to push the government into positive initiatives, and the EAs are well-coordinated about this.
In a local takeoff scenario, the leading team is not concerned about alignment or is basically not cosmopolitan in its values.
Comment
We’re going to have clear, legible things that ensure safety (which might be "never build systems of this type").
Governments are much more competent than you currently believe (I don’t know what you believe, but I probably think they are more competent than you do).
We have so little evidence/argument so far that model uncertainty alone means we can’t conclude "it is unimportant to think about how we could use the resources of the most powerful actors in the world".