A Developmental Framework for Rationality

https://www.lesswrong.com/posts/5zpBBHWTYAnEo4LcG/a-developmental-framework-for-rationality

Contents

A] Techniques Rule:

You’re on a quest to find the One True Rationality. When you first encounter rationality, it’s in the context of cognitive biases, pesky glitches which lead us down suboptimal paths when we can clearly see a better way forward.
For example:

B] Building Automaticity:

The problem, you reason, isn’t just about beating biases: "Just do X instead of Y" isn’t a viable strategy if there isn’t a way to actually get yourself to do X at the right moments. But it seems quite difficult to be intentional all the time, to be able to reflectively say, "Ah, and now is the right time to cast my spell Premorta, which will show me an instance of the future where things have gone terribly wrong!" That just won’t do. Thus you decide to become a cyborg.

What matters, after all, you reason, is just that the desired algorithm gets executed at the right time. You turn to habits, towards installing the aforementioned algorithms into yourself. A subtle shift in focus happens here: whereas you previously thought about techniques in terms of nullifying your biases, the techniques themselves now come into full focus. It’s about diving deep into the question of what it "really" means to practice a rationality skill.

Key assumptions at this stage are that humans are stimulus-response machines, and that any rationality technique can be turned into a habit. One big insight at this stage is the distinction between declarative and procedural knowledge: it’s entirely possible to explain a rationality technique without giving any actual, useful information on how to implement it in real life.

Things I’ve written which fall under this viewpoint:

C] Going Mental

You’re back here again. You had one look at the codebase, and it was a total mess: no documentation and a bunch of hard-coded variables. So, against all hope, you wonder if maybe you can gain some more insight by looking at exactly what’s going wrong with the installation process itself. No in-depth internals analysis needed, thank you very much. You turn your attention inwards, to the practice, rather than what’s being done behind the scenes.

By paying attention to the sensations of engaging in rationality, though, you realize that focusing only on execution is futile: learning a skill is still a mental phenomenon, no matter how much you’d like to abstract away the messiness of the mind and focus on implementation. You can’t remove your mind from the equation.

Things I’ve written which explore this foray into the necessity of the internal experience of learning rationality:

D] The Human Alignment Problem:

"Hello System 1, my old friend." You approach yourself tentatively. "I tried going against you, molding you to my will. Then I tried ignoring you entirely, hoping I could do well without you. But it looks like I’ll need your help after all. Turns out the real Rationality was inside me all along." You say nothing in response. (It’s yourself, after all.) "How are we feeling right now?" you ask. Which, of course, you already know the answer to. Anger. Aversion. Calm. Sorrow. Worry. Want. Pleasure. Joy. You have lots of feelings. And they’re all important.


Given the intrinsically mental portion of learning rationality, the direct approaches offered by techniques and habits are missing a crucial component. Namely, there are situations where, despite your best efforts to set up a system to do X, something inside of you resists. The sensation of forcing yourself to do something is not a pleasant one. So instead of trying to bind the demon to your will, you fuse with the demon. Practically speaking, this means coming to terms with all of the intuitive, wordless, gut pieces of yourself.

One component of this frame is letting go of some sense of direct control over yourself. You accept that motivation isn’t always something you can hack together with a formula. It’s not that you endorse a worldview where motivation is magical and irreducible; it’s just that, instrumentally speaking, there are ways of becoming driven which don’t involve thinking, "And now I must motivate myself to get X done!"

What you gain in return is a great deal more self-trust. You no longer need to be the person who watches over your own shoulder, making sure you get things done. It’s now much less about "forcing" yourself to do things, because the things you’re doing are things you want to do anyway. Internal conflict is largely removed from the equation because you’re giving all of your different sides a voice. This isn’t just a new technique, or a new way of getting techniques down; it’s a viewpoint that isn’t about the techniques at all.

Things I’ve written which are about this shift into feeling out your feelings:

E] Paradigmaster:

Having aligned yourself, you have a better measure of yourself. You know there are some things you can’t do. Or can you? Throughout your travels, you hear whispers. You hear whispers of other powers. Other powers which promise more than alignment:

Coda:

Those familiar with Kegan’s stages of development will likely see parallels here, especially as I was reading In Over Our Heads while writing this essay. I took care to paint this framework not as one where each stage is strictly better than the last, but as a set of growing considerations. I think the most useful insight I can offer is seeing how different rationality techniques emerge as a consequence of trying to solve different components (practical, mental, ontological, etc.) of the self-improvement problem.

Comment

https://www.lesswrong.com/posts/5zpBBHWTYAnEo4LcG/a-developmental-framework-for-rationality?commentId=5K5i4PGLYw7vXXbFY

I love this! I think I followed/​am following a similar progression, though I’d love to hear others’ experiences to see if we’re just typical-minding ourselves. This sort of road map to rationality seems like it would be very useful to someone in the beginning stages of their journey. You have a very easy-to-read style.

https://www.lesswrong.com/posts/5zpBBHWTYAnEo4LcG/a-developmental-framework-for-rationality?commentId=cFXAXMQX76vptcqeH

Say there were two selves. The first: roughly "pleasure seeking", animalistic, the beast, System 1, etc.

The second: the knower, planner, thinker. System 2.

First you are the animal alone. Then you want to plan something. The animal doesn’t like that. So you pretend it doesn’t exist and push through it. Then you are the planner alone. But that hurts. So you go back with a carrot and stick. As the planner. And that works a bit. Until it doesn’t.

Then you decide to try being the animal again. But that doesn’t work either.

Then you work out you are both. The non-dual congruent self.

https://www.lesswrong.com/posts/5zpBBHWTYAnEo4LcG/a-developmental-framework-for-rationality?commentId=Nn4EpGLCexNmpnaMS

Agreed that there’s something important about the idea of having a developmental framework here, especially with the corollary that different people at different stages may need different or even opposite advice. I don’t have a strong attachment to this one in particular but it seems pretty good. I think you can skip straight to starting on D without passing through A, B, or C, and that’s basically what I tried to get everyone to do at SPARC last year.


https://www.lesswrong.com/posts/5zpBBHWTYAnEo4LcG/a-developmental-framework-for-rationality?commentId=K3v2jPPDnJnMs28W2

Nice, that sounds really good! Any thoughts on how well that worked? (I agree that skipping around, moving back and forth, etc. is doable and I don’t want it to be cached that the only way to move forward is go sequentially or anything like that.)

Comment

I think I got a decent number of people to pay attention to and care about their feelings by the end, but I don’t know if they stayed that way after leaving SPARC.

https://www.lesswrong.com/posts/5zpBBHWTYAnEo4LcG/a-developmental-framework-for-rationality?commentId=bDj9RBdbQ7QHR6f6K

Really interesting. To provide a personal data point/​take: my journey started at a mix of C and D through personal introspection. Reading HPMOR by happenstance one day led me to LessWrong, which got me started in A at the same time as I was taking a behavioral economics course. That let me re-approach C and D with a framework more in line with what the rationality community uses, since everything was ad hoc before, and I’m currently at the point of trying to implement the automaticity in B.


https://www.lesswrong.com/posts/5zpBBHWTYAnEo4LcG/a-developmental-framework-for-rationality?commentId=2vGnBiyhDfm6EdEfs

Yeah, one thing I made sure not to do in this essay was label the stages with numbers, which might have implied more of a hierarchical structure than I think is accurate. It definitely feels like people bounce around a lot, and I definitely think people will re-approach different places with more insight, which allows you to better grok the models used in each stage.

https://www.lesswrong.com/posts/5zpBBHWTYAnEo4LcG/a-developmental-framework-for-rationality?commentId=82D2NYuNvoHDiCMS2

Oh wow. Yes. I hadn’t thought of it in these terms before, but this feels pretty much exactly like my journey so far. (Currently in the process of figuring out stage D.)

https://www.lesswrong.com/posts/5zpBBHWTYAnEo4LcG/a-developmental-framework-for-rationality?commentId=Sp6J6xxCBxscikuhW

This very much aligns with my personal experience: I’m currently somewhere in the middle of "Building Automaticity". I’ll need to take time to read and reflect on the more detailed posts, but as @Hazard says, this seems potentially useful to the beginner, in this case myself!