Relevant pre-AGI possibilities

https://www.lesswrong.com/posts/zjhZpZi76kEBRnjiw/relevant-pre-agi-possibilities

Link post

Interactive "Generate Future" button

Asya Bergal and I made an interactive button to go with the list. The button randomly generates a possible future according to probabilities that you choose. It is very crude, but it has been fun to play with, and perhaps even slightly useful. For example, once I decided that my credences were probably systematically too high, because the futures generated with them were too crazy. Another time I used the alternate method (described below) to recursively generate a detailed future trajectory, written up here. I hope to make more trajectories like this in the future, since I think this method is less biased than the usual method for imagining detailed futures.

To choose probabilities, scroll down to the list below and fill each box with a number representing how likely you think the entry is to occur in a strategically relevant way prior to the advent of advanced AI. (1 means certainly, 0 means certainly not. The boxes are all 0 by default.) Once you are done, scroll back up and click the button.

A major limitation is that the button doesn’t take correlations between possibilities into account. The user needs to do this themselves, e.g. by redoing any generated future that seems silly, by flipping a coin to choose between two generated possibilities that seem contradictory, or by choosing between them based on what else was generated (see the sketch below for the basic sampling step). Here is an alternate way to use this button that mostly avoids this limitation:
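As a rough illustration of the basic generation step, here is a minimal sketch assuming each entry is sampled independently with the probability you entered; the entry names and numbers are placeholders, not the post’s actual list or anyone’s real credences, and the real button’s implementation may differ.

```python
import random

# Placeholder entries and credences (1 = certainly, 0 = certainly not);
# these are illustrative, not the post's actual list or anyone's real numbers.
credences = {
    "Advanced science automation and research tools": 0.8,
    "Dramatically improved computing hardware": 0.6,
    "Deterioration of collective epistemology": 0.4,
}

def generate_future(credences):
    """Include each possibility independently with its assigned probability.

    This mirrors the limitation noted above: independent sampling ignores
    correlations between possibilities, so silly combinations can come up
    and have to be rerolled or adjudicated by the user.
    """
    return [name for name, p in credences.items() if random.random() < p]

print("One randomly generated future:")
for possibility in generate_future(credences):
    print(" -", possibility)
```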

Key

Letters after list titles indicate that I think the change might be relevant to:

List of strategically relevant possibilities

Inputs to AI

1. Advanced science automation and research tools (TML, TAS, CHA, MIS)

Narrow research and development tools might speed up technological progress in general or in specific domains. For example, several of the other technologies on this list might be achieved with the help of narrow research and development tools.

2. Dramatically improved computing hardware (TML, TAS, POL, MIS)

By this I mean computing hardware improves at least as fast as Moore’s Law. Computing hardware has historically become steadily cheaper, though it is unclear whether this trend will continue. Some example pathways by which hardware might improve at least moderately include:

https://www.lesswrong.com/posts/zjhZpZi76kEBRnjiw/relevant-pre-agi-possibilities?commentId=7Zzx9ryuLHSuA8DQz

On the point about ‘Deterioration of collective epistemology’, and how it might interact with an impending risk, we have some recent evidence in the form of the Coronavirus response. It’s important to note the potential role here of sleepwalk bias / the Morituri Nolumus Mori (MNM) effect: the way I conceptualised it, sufficiently terrible collective epistemology can vitiate any advantage you might expect from the MNM effect / discounting sleepwalk bias, but it has to be so bad that *current* danger is somehow rendered invisible. In other words, the MNM effect says the quality of our collective epistemology and how bad the danger is aren’t independent: we can get slightly smarter in some relevant ways if the stakes go up, though there do appear to be some levels of impaired collective epistemology that are hard to recover from even for high stakes; if the information about risk is effectively or actually inaccessible, we don’t respond to it.

On the other hand, the MNM effect requires leaders and individuals to have access to information about the state of the world right now (i.e. how dangerous things are at the moment). Even in countries with reasonably free flow of information this is not a given. If you accept Eliezer Yudkowsky’s thesis that clickbait has impaired our ability to understand a persistent, objective external world, then you might be more pessimistic about the MNM effect going forward. Perhaps for this reason, we should expect countries with higher social trust, and therefore more ability for individuals to agree on a consensus reality and understand the level of danger posed, to perform better. Japan and the countries in Northern Europe like Denmark and Sweden come to mind, and all of them have performed better than the mitigation measures employed by their governments would suggest.

https://www.lesswrong.com/posts/zjhZpZi76kEBRnjiw/relevant-pre-agi-possibilities?commentId=jbBrmztJ8jBKDSf4S

A major factor that I did not see on the list is the rate of progress on algorithms, and the closely related formal understanding, of deep AI systems. Right now these algorithms can be surprisingly effective (AlphaZero, GPT-3) but are extremely compute-intensive and often sample-inefficient. Lacking any comprehensive formal models of why deep learning works as well as it does, and why it fails when it does, we are groping toward better systems. Right now the incentives favor scaling compute power to get more marquee results, since finding more efficient algorithms doesn’t scale as well with increased money. However, the effort to make deep learning more efficient continues and probably can give us multiple orders of magnitude of improvement in both compute and sample efficiency. Orders-of-magnitude improvement in the algorithms would be consistent with our experience in many other areas of computing, where speedups due to better algorithms have often beaten speedups due to hardware.

Note that this is (more or less) independent of advances that contribute directly to AGI. For example, algorithmic improvements may let us train GPT-3 on 100 times less data, with 1000 times less compute, but may not suggest how to make the GPT series fundamentally smarter / more capable, except by making it bigger.

https://www.lesswrong.com/posts/zjhZpZi76kEBRnjiw/relevant-pre-agi-possibilities?commentId=WiB9S7rcGrZ96hDgJ

GPT-3 is very sample-efficient. You can put in just a few examples, and it’ll learn a new task, much like a human would!

Oh, did you mean sample-inefficient in training data? Yeah, I suppose, but I don’t see why anyone particularly cares about that.
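To make the in-context sense of "sample-efficient" concrete, here is an illustrative sketch of the kind of few-shot prompt being described; the task, examples, and the commented-out API call are hypothetical, and the point is only that a handful of demonstrations in the prompt, rather than further training, is what specifies the task.

```python
# Illustrative few-shot prompt: a handful of demonstrations in the context
# window, rather than any update to the training data, is what specifies the
# task. The task and examples below are just for illustration.
few_shot_prompt = """Translate English to French.

English: cheese
French: fromage

English: good morning
French: bonjour

English: thank you
French:"""

# A call to some hypothetical text-completion API would go here, e.g.:
#   completion = model.complete(few_shot_prompt)
# and the model would be expected to continue with something like "merci".
print(few_shot_prompt)
```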

https://www.lesswrong.com/posts/zjhZpZi76kEBRnjiw/relevant-pre-agi-possibilities?commentId=rwanrT4uoCtPucgsY

Hmm, interesting point. I had considered things like "New insights accelerate AI development" but I didn’t put them in because they seemed too closely intertwined with AI timelines. But yeah, now that you mention it, I think it deserves to be included. Will add!

https://www.lesswrong.com/posts/zjhZpZi76kEBRnjiw/relevant-pre-agi-possibilities?commentId=R78GYtdEvmqsty3Th

"Yet almost everyone agrees the world will likely be importantly different by the time advanced AGI arrives."

Why do you think this? My default assumption is generally that the world won’t be super different from how it looks today in strategically relevant ways. (Maybe it will be, but I don’t see a strong reason to assume that, though I strongly endorse thinking about big possible changes!)

https://www.lesswrong.com/posts/zjhZpZi76kEBRnjiw/relevant-pre-agi-possibilities?commentId=pcdTbnjoeN4fEYfWJ

Maybe I was overconfident here. I was generalizing from the sample of people I’d talked to. Also, as you’ll see by reading the entries on the list, I have a somewhat low bar for strategic relevance.

https://www.lesswrong.com/posts/zjhZpZi76kEBRnjiw/relevant-pre-agi-possibilities?commentId=AH6Nf3HLXgvBhaBbN

Note that the problem with understanding the behavior of C. elegans is not understanding the neurons; it is understanding the connections that are outside of the neurons. From a New York Times article ( https://www.nytimes.com/2011/06/21/science/21brain.html ):

"Why is the wiring diagram produced by Dr. White so hard to interpret? She pulls down from her shelves a dog-eared copy of the journal in which the wiring was first described. The diagram shows the electrical connections that each of the 302 neurons makes to others in the system. These are the same kind of connections as those made by human neurons. But worms have another kind of connection. Besides the synapses that mediate electrical signals, there are also so-called gap junctions that allow direct chemical communication between neurons. The wiring diagram for the gap junctions is quite different from that of the synapses. Not only does the worm’s connectome, as Dr. Bargmann calls it, have two separate wiring diagrams superimposed on each other, but there is a third system that keeps rewiring the wiring diagrams. This is based on neuropeptides, hormonelike chemicals that are released by neurons to affect other neurons."

Humans are slowly making progress in understanding how C. elegans works; see for example Parallel Multimodal Circuits Control an Innate Foraging Behavior: https://www.cell.com/neuron/fulltext/S0896-6273(19)30080-7
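To make the "superimposed wiring diagrams" picture concrete, here is a toy sketch of how one might represent a connectome whose neurons share several distinct connection systems; all neuron names and edges below are invented for illustration and are not real C. elegans data.

```python
from collections import defaultdict

# Toy model of a connectome with several superimposed "wiring diagrams":
# the same neurons participate in distinct connection systems, so a single
# graph is not enough to describe how the circuit behaves.
# All neuron names and edges here are invented for illustration.
LAYERS = ("synapses", "gap_junctions", "neuropeptides")

connectome = {layer: defaultdict(set) for layer in LAYERS}

def connect(layer, src, dst):
    """Record a connection from src to dst in one of the wiring diagrams."""
    connectome[layer][src].add(dst)

# Illustrative edges only (the real worm has 302 neurons):
connect("synapses", "n1", "n2")
connect("gap_junctions", "n2", "n3")
connect("neuropeptides", "n3", "n1")  # the third system, which modulates the others

# Interpreting behavior means reasoning over all the layers at once,
# which is part of why even a 302-neuron wiring diagram is hard to read.
for layer, edges in connectome.items():
    print(layer, {src: sorted(dsts) for src, dsts in edges.items()})
```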