[Question] Mathematical Models of Progress?

https://www.lesswrong.com/posts/ysQEJ8tvm8KYc76D5/mathematical-models-of-progress

I would be interested in collecting a bunch of examples of mathematical modeling of progress. I think there are probably several of these here, but I don’t expect to be able to find all of them myself. I’m also interested to know about any models like this elsewhere. I was reading the LessWrong 2018 books, and the following posts stuck out to me:

Comment

https://www.lesswrong.com/posts/ysQEJ8tvm8KYc76D5/mathematical-models-of-progress?commentId=6bnbb7kEbvo3uHvnq

In this post I made an attempt to model intelligence explosion dynamics, by making the very oversimplified exponential-returns-to-exponentially-increasing-intelligence model used by Bostrom and Yudkowsky slightly less oversimplified.

This post tries to build on a simplified mathematical model of takeoff first put forward by Eliezer Yudkowsky and then refined by Bostrom in Superintelligence, modifying it to account for the different assumptions behind continuous, fast progress as opposed to discontinuous progress. As far as I can tell, few people have touched these sorts of simple models since the early 2010s, and no one has tried to formalize how newer notions of continuous takeoff fit into them. I find that it is surprisingly easy to accommodate continuous progress, and that the results are intuitive and fit with what has already been said qualitatively about continuous progress. The page includes Python code for the model.

This post doesn't capture all views of takeoff. In particular, it doesn't capture the non-hyperbolic faster-growth-mode scenario, where marginal intelligence improvements are exponentially increasingly difficult, so we get a (continuous or discontinuous) switch to a new exponential growth mode rather than runaway hyperbolic growth. But I think that by modifying the f(I) function that determines how RSI capability varies with intelligence, we can incorporate such views. In the context of the exponential model given in the post, that would correspond to an f(I) function of the form

f(I)=\dfrac{1}{I\left(1+e^{-d(I(t)-I_{AGI})}\right)}

which would result in a continuous switch, with sharpness determined by the size of d, to a single faster exponential growth mode. So I think the model still roughly captures the intuition behind scenarios that involve either a continuous or a discontinuous step to an intelligence explosion.
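As a concrete illustration (not the post's actual code), here is a minimal Euler-integration sketch of the "switch to a faster exponential" behavior. It assumes the dynamics take the hypothetical form dI/dt = I·(r0 + r1·I·f(I)) with the f(I) above, so that I·f(I) is a sigmoid in I and the effective exponential growth rate shifts continuously from r0 to r0 + r1 as intelligence passes I_AGI; the parameters r0, r1, d, and I_AGI are all illustrative, not taken from the post.

```python
import math

def f(I, d=5.0, I_agi=10.0):
    """The f(I) from the comment: 1 / (I * (1 + exp(-d * (I - I_AGI))))."""
    return 1.0 / (I * (1.0 + math.exp(-d * (I - I_agi))))

def simulate(I0=1.0, r0=0.05, r1=0.45, dt=0.01, steps=20_000):
    """Euler-integrate dI/dt = I * (r0 + r1 * I * f(I)).

    I * f(I) is a sigmoid rising from ~0 to ~1 around I_AGI, so the
    effective exponential growth rate moves smoothly from r0 to r0 + r1;
    d controls how sharp (continuous vs. near-discontinuous) the move is.
    """
    I, traj = I0, [I0]
    for _ in range(steps):
        I += I * (r0 + r1 * I * f(I)) * dt
        traj.append(I)
    return traj

traj = simulate()
# Measured growth rates (per unit time) before and after the transition:
early = math.log(traj[1000] / traj[0]) / 10.0     # close to r0 = 0.05
late = math.log(traj[-1] / traj[-1001]) / 10.0    # close to r0 + r1 = 0.5
```

With a large d the transition approximates a discontinuous jump between the two exponential modes; with a small d it is a gradual blend, which is the continuous-takeoff picture.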

https://www.lesswrong.com/posts/ysQEJ8tvm8KYc76D5/mathematical-models-of-progress?commentId=5ih87LaQ4tSnyzvph

Artificial Intelligence and Economic Growth, by Chad Jones et al., for a particular model; Economic growth under transformative AI for a comprehensive review.

Comment

https://www.lesswrong.com/posts/ysQEJ8tvm8KYc76D5/mathematical-models-of-progress?commentId=HFnJAfeZyqAik6bbF

Ah, excellent! I looked at the first link. This seems nice in that it (1) attempts to treat AI as continuous with earlier forms of automation, meaning the models can be meaningfully checked and fine-tuned based on historical trends, and (2) uses the same kind of simple mathematical model I’m looking at.

https://www.lesswrong.com/posts/ysQEJ8tvm8KYc76D5/mathematical-models-of-progress?commentId=hyFcxocXpMnfRHak6

Takeoff Speed: Simple Asymptotics in a Toy Model.

This post examines simple models of recursive self-improvement, where intelligence is the derivative of knowledge, but intelligence is also some function of knowledge (since knowledge can be applied to improve intelligence). It concludes that growth in intelligence is sublinear so long as returns from knowledge diminish faster than \sqrt{x}; subexponential so long as returns are diminishing at all; exponential precisely when returns are linear; and superexponential (with a singularity at finite time) if returns increase faster than linearly, e.g. like a higher-degree polynomial. In a comment there, I argue that it makes more sense to think in terms of the growth in capabilities, rather than the growth in intelligence; making that shift, it seems like almost any assumption gives you superlinear growth, but the crossover to superexponential is still at the same spot.
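The regimes described can be checked numerically. Below is a minimal sketch, assuming the toy model reads dK/dt = I with I = K^a (so returns to knowledge scale as K^a; the parameter a and the initial condition K0 = 1 are my assumptions for illustration). Then a = 1 gives exponential growth, a < 1 gives polynomial (subexponential) growth, and a > 1 blows up in finite time.

```python
import math

def knowledge(a, t_end, dt=1e-4, K0=1.0):
    """Euler-integrate dK/dt = K**a: knowledge K grows at a rate equal to
    intelligence I = K**a, i.e. returns to knowledge scale as K**a."""
    K = K0
    for _ in range(int(round(t_end / dt))):
        K += (K ** a) * dt
    return K

# Closed-form solutions (K0 = 1) the integrator should match:
#   a = 1   -> K(t) = e**t                 (exponential: linear returns)
#   a = 1/2 -> K(t) = (t/2 + 1)**2         (so I = sqrt(K) grows only linearly)
#   a = 2   -> K(t) = 1 / (1 - t)          (finite-time singularity at t = 1)
```

Varying a across 1/2 and 1 reproduces the sublinear/superlinear and subexponential/superexponential boundaries the post derives analytically.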

Modeling the Human Trajectory

This seems like a wonderful exploration of more sophisticated versions of the model discussed in 1960: the year the singularity was cancelled. A quick glance suggests that it doesn’t make the modification I was interested in exploring, but I have not read it thoroughly yet.

https://www.lesswrong.com/posts/ysQEJ8tvm8KYc76D5/mathematical-models-of-progress?commentId=9KoXT65HvFWcBXB9o

Artificial sentience is a technological project very comparable to the Manhattan Project, though. Prior to reaching critical mass, it's not doing anything at all. Once you reach critical mass (in this case, useful general-purpose AI agents) you can use the first systems to let you build the one that explodes.
The thing is that it's all or nothing. AI agents that don't produce more value than their cost have negative gain. We have all sorts of crummy agents of questionable utility today (agents that try to spot fraud, machinery failure, or other difficult-to-solve regression problems). Once you get to positive gain, you need an AI system sophisticated enough to self-improve from its own output. Before you hook in the last piece of such a system, nothing happens, and we don't know exactly when this will start to work.
This is fundamentally difficult to model. If you plotted human-caused fission events per year, you would see a line near zero, suddenly increasing in 1942 with the Chicago Pile, then going vertical and needing a logarithmic scale with the 1945 Los Alamos test.
Progress had been made, thousands of little things, to reach this point, but it wasn't really certain until the first blinding flash and mushroom cloud that this overall effort was really going to work. There could have been all kinds of hidden laws of nature that would have prevented a fission device from working. Similarly, there are plenty of people (often, it seems, to protect their own sense of importance or well-being) who believe some hidden law of nature will prevent an artificial sentience from really working.

Comment

https://www.lesswrong.com/posts/ysQEJ8tvm8KYc76D5/mathematical-models-of-progress?commentId=wCkAzcdCLQEsCxbyW

I agree with the dangers of modeling progress in this way. I'm just curious how well we can build the model, and what it would predict. For a specific sort of person, these mathematical models are more convincing than detailed explanations of why the future might go specific ways. And it seems to me that there is some low-hanging fruit around improving these sorts of models.

https://www.lesswrong.com/posts/ysQEJ8tvm8KYc76D5/mathematical-models-of-progress?commentId=Le4BhjQXmZiKwApsP

Interesting hyperbolic model here: https://www.researchgate.net/publication/325664983_The_21_st_Century_Singularity_and_its_Big_History_Implications_A_re-analysis