The Values-to-Actions Decision Chain

https://www.lesswrong.com/posts/gdQnhjstyFpPtRFnr/the-values-to-actions-decision-chain

Chain up

To illustrate how to use the V2ADC (see diagram below): Suppose an ethics professor decided that...

Cross the chasm

Imagine someone visiting an EA meetup for the first time. If the person stepping through the doorway were an academic (or LessWrong addict), they might be thrilled to see a bunch of nerdy people engaging in intellectual discussions. But an activist (or social entrepreneur) stepping in would feel more at home seeing agile project teams energetically typing away at their laptops. Right now, most local EA groups emphasise the former format, I suspect in part due to CEA’s deep engagement stance, and in part because it’s hard to find volunteer projects that have sufficient direct impact. If the academic and the activist happened to strike up a conversation, both sides could have trouble seeing the value produced by the other, because each is stuck at an entirely different level of abstraction. An organiser could help bridge this disconnect by gradually encouraging the activist to go meta (chunking up) and the academic to apply their brilliant insights to real-life problems (chunking down).
As a community builder, I’ve found that this chasm seems especially wide for newcomers. Although it narrows over time, people who’ve been involved in the EA community for years still encounter it, with some emphasising work on epistemics and values, and others emphasising work to gather data and get things done. The middle area (i.e. deciding on the problem, intervention, project, and workflow) seems neglected. Put simply, EAs tend to emphasise one of two categories:

Commit & correct

The common thread so far is that you need to link up work done at various construal levels in order to do more good. To make this more specific: Individuals who increase their impact the fastest tend to be

Integrate the low-level into the high-level

Most of this post has been about pushing values down into actions, which might suggest that people doing low-construal-level work should merely follow instructions from above. Although it’s indeed useful for those people to use prioritisation advice to decide where to work, they also fulfil the essential function of feeding information back up that can be used to update the overarching models.

We face a major risk of ideological rust in our community. This is where the people working out high-level decisions either don’t receive enough information from below or no longer respond to it. As a result, their models drift away from reality and their prioritisation advice becomes misguided. To illustrate this…

At a Strategies level, you find that much of AI alignment research is built on paradigms like ‘the intelligence explosion’ and ‘utility functions’ that arose from pioneering work done by the Future of Humanity Institute and the Machine Intelligence Research Institute. Fortunately, leaders within the community are aware of the information cascades this can lead to, but the question remains whether they’re integrating insights on machine learning progress into their organisations’ strategies fast enough.

At a Causes level, a significant proportion of the EA community champions work on AI safety. But then there’s the question: how many others are doing specialised research on the risks of pandemics, nanotechnology, and so on? And how much of that work gets integrated into new cause rankings?

At a Values level, it is crazy how one person’s worldview leads them to work on safeguarding the existence of future generations, another’s on preventing their suffering, and another’s on neither. This reflects genuine moral uncertainty – to build up a worldview, you basically need to integrate most of your life experiences into a workable model. Having philosophers and researchers explore diverse worldviews and exchange arguments is essential to ensuring that we don’t rust in our current conjecture.

Now extend this to organisational structure: we should also apply the principle of decentralised experimentation, exchange and integration of information to how we structure EA organisations. There has been a tendency to concentrate resources (financial, human and social capital) within a few organisations like the Open Philanthropy Project and the Centre for Effective Altruism, which then set the agenda for the rest of the community (i.e. push decisions down their chains). This seems somewhat misguided. Larger organisations do have less redundancy and can divide up tasks better internally, but a team of 24 staff members is still at a clear cognitive disadvantage in gathering and processing low-level data compared to a decentralised exchange between 1,000 committed EAs (a toy simulation below illustrates the scaling). By themselves, they can’t zoom in closely on enough details to update their high-level decision models appropriately. In other words, concentrated decision-making leads to fragile decision-making – just as it has for central planning.

Granted, it is hard to find people you can trust to delegate work to. OpenPhil and CEA are making headway in allocating funding to specialised experts (e.g. OpenPhil’s allocation to CEA, which in turn allocated to EA Grants) and in collaborating with organisations that gather and analyse more detailed data (e.g. CEA’s individual outreach team working with the Local Effective Altruism Network). My worry is that they’re not delegating enough.
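As an aside, the cognitive-disadvantage claim above is at its core a sample-size argument: averaging many independent noisy observations shrinks the error of the pooled estimate roughly with the square root of the number of observers. Below is a minimal toy simulation of that point (mine, not from the original post). It assumes observations are independent and equally noisy, which real community knowledge certainly isn’t, so treat it as an intuition pump rather than a model of EA organisations; all names and numbers in it are illustrative.

```python
import random
import statistics

random.seed(0)

TRUTH = 10.0    # the latent quantity everyone is trying to estimate
NOISE_SD = 5.0  # spread of each individual noisy observation
TRIALS = 2000   # Monte Carlo repetitions

def pooled_estimate_error(n_observers: int) -> float:
    """Average absolute error of the mean of n independent noisy observations."""
    errors = []
    for _ in range(TRIALS):
        observations = [random.gauss(TRUTH, NOISE_SD) for _ in range(n_observers)]
        estimate = statistics.fmean(observations)  # pool by simple averaging
        errors.append(abs(estimate - TRUTH))
    return statistics.fmean(errors)

# A small central team vs. a large decentralised network of observers.
for n in (24, 1000):
    print(f"{n:>4} observers -> mean absolute error {pooled_estimate_error(n):.3f}")
```

With these numbers, the 1,000-observer pool’s error comes out roughly √(1000/24) ≈ 6.5 times smaller than the 24-observer team’s. As the post argues, though, that advantage only materialises if there are channels that actually integrate what the wider network observes.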
Given the uncertainty they are facing, most of OpenPhil’s charity recommendations and CEA’s community-building policies should be overturned or radically altered in the next few decades. That is, if they actually discover their mistakes. This means it’s crucial for them to encourage more people to do local, contained experiments and then integrate their results into more accurate models. EDIT: see these comments on where they could create better systems to facilitate this:

Comment

https://www.lesswrong.com/posts/gdQnhjstyFpPtRFnr/the-values-to-actions-decision-chain?commentId=w4EfQqH42mQKZfeyW

Very interesting! You’ve captured a big reason why I don’t myself participate much in the EA/Rationalist community, despite being an SSC addict. Many of the meetups seem to revolve around conversation, and I grow anxious in conversation-only settings. I want to do, and yes, sometimes that means I do the non-optimal thing, because I simply must express my doing energy somehow. If more meetups came with promises of action (as major as hackathons or as minor as results documentation), I might find myself participating more in the community. Alternatively, I could learn to be comfortable with the lack of action in such settings. That would be a useful skill, allowing me more time to reflect before acting. I would like to grow that skill, but at the same time, my orientation towards action is very much a part of me, so I appreciate settings where that orientation can be put to good use.

https://www.lesswrong.com/posts/gdQnhjstyFpPtRFnr/the-values-to-actions-decision-chain?commentId=KuppBELHQTryRGFTN

I like the model and appreciate that you’ve found some ways to apply it. To me this reads a lot like the foundations of a larger project you might have in mind. Care to share any of your thoughts about what you might do with this information?

Comment

https://www.lesswrong.com/posts/gdQnhjstyFpPtRFnr/the-values-to-actions-decision-chain?commentId=aWqMQryDmE2tsEw25

So, I do find it fascinating to analyse how multi-layered networks of agents interact and how those interactions can be improved to better reach goals together. My impression, though, is that it’s hard to make progress in this area (otherwise several simple coordination problems would already have been solved), and I lack expertise in network science, complexity science, multi-agent systems, and microeconomics. I haven’t set out a clear direction, but I do find your idea of making this into a larger project inspiring.

I’ll probably work on gathering more empirical data over time to overhaul any conclusions I came to in this article and to gain a more fine-grained understanding of how people interact in the EA community. When I happen to make some creative connections between concepts again, I’ll start writing those up. :-)

I think I’ll also write a case study in the coming months that examines one possible implication of this model (e.g. local group engagement) in a more detailed, balanced way (for the strategic implications I wrote about in this post, I leant towards being concise and activating people to think about them rather than dispassionately examining a bunch of separate data sources).

Comment

Awesome! Part of what makes me ask is that this reminds me a lot of how I got moving on work I care about myself: I came up with a complex model to explain complex phenomena and then from there explored ideas that eventually led me to having a unique perspective to bring to the AI alignment discussion. I didn’t know that was going to happen at the time, and I like your thoughts on future work since they were much like mine.

Looking forward to seeing where your thinking leads you!

Comment

Ah, I have the first diagram in your article as one of my desktop backgrounds. :-) It was a fascinating demonstration of how experiences can be built up into more complex frameworks (even though I feel I only half-understand it). It was one of several articles that inspired and moulded my thinking in this post. I’d value a half-hour Skype chat with you some time. If you’re up for it, feel free to schedule one here.

https://www.lesswrong.com/posts/gdQnhjstyFpPtRFnr/the-values-to-actions-decision-chain?commentId=ryMmD7HS4hGaDNvY7

Thanks for mentioning this!

Let me think about your question for a while. I’ll come back to it later.