ARC is hiring!

https://www.lesswrong.com/posts/dLoK6KGcHAoudtwdo/arc-is-hiring

What is ARC?

ARC is a non-profit organization focused on theoretical research to align future machine learning systems with human interests. We are aiming to develop alignment strategies that would continue to work regardless of how far we scaled up ML or how ML models end up working internally. Probably the best way to understand our work is to read Eliciting Latent Knowledge, a report describing some recent and upcoming research, which illustrates our general methodology. We currently have 2 research staff (Paul Christiano and Mark Xu). We’re aiming to hire another 1-2 researchers in early 2022. ARC is a new organization and is hoping to grow significantly over the next few years, so early hires will play a key role in helping define and scale up our research.

Who should apply?

Most of all, you should send in an application if you feel excited about proposing the kinds of algorithms and counterexamples described in our report on ELK. We're open to anyone who is excited about working on alignment, even if you don't yet have any research background (or your research is in another field). You may be an especially good fit if you:

Hiring process and details

You can apply by filling out this short form. We will begin reviewing applications and interviewing candidates on January 3, 2022. Our hiring process involves a series of 1-2 hour interviews followed by a paid day-long work sample. Where possible we also prefer to do a longer trial, although we understand that's not practical for everyone. We are based in Berkeley, CA, and would prefer people who can work from our office, but we're open to discussing remote arrangements for great candidates. Salaries are in the $150k-$400k range depending on experience.

Comment

https://www.lesswrong.com/posts/dLoK6KGcHAoudtwdo/arc-is-hiring?commentId=tMnATQQiEZpSg8Bnr

"We are aiming to develop alignment strategies that would continue to work regardless of how far we scaled up ML or how ML models end up working internally." Is it fair to say that you are assuming that the AI systems are in fact based on ML, and not some other kind of AI (e.g. GOFAI that actually works somehow, or something more exotic)?

Comment

https://www.lesswrong.com/posts/dLoK6KGcHAoudtwdo/arc-is-hiring?commentId=R8oAjB4XnsvQrAQzc

I think that "TAI is based on ML" is plausible, and responsible for a significant part of the total risk posed by AI. That said, I think our work is reasonably likely to be useful even in other worlds (since the same basic difficulties seem likely to arise in different forms), and it's useful to think concretely about something that exists today regardless of whether ML is a central ingredient in future AI systems. Prosaic AI alignment is still a reasonable representation of my position.