Announcing the Alignment Newsletter

https://www.lesswrong.com/posts/RvysgkLAHvsjTZECW/announcing-the-alignment-newsletter

I’ve been writing weekly emails for the Center for Human-Compatible AI (CHAI) summarizing the past week’s content relevant to AI alignment. They have been useful enough that I’m now making them public! You can:

Comment

https://www.lesswrong.com/posts/RvysgkLAHvsjTZECW/announcing-the-alignment-newsletter?commentId=qLjCmxhmaef4Kipos

Since people seem to be finding it useful, I’ve updated the archive with public versions of the five emails I wrote for CHAI, covering roughly two months of content.

Comment

https://www.lesswrong.com/posts/RvysgkLAHvsjTZECW/announcing-the-alignment-newsletter?commentId=EH3xRe7kruACWttQx

Huh, this makes me much more excited for the email—having your brief personal reviews of whether it’s useful to read and why is great!

Comment

https://www.lesswrong.com/posts/RvysgkLAHvsjTZECW/announcing-the-alignment-newsletter?commentId=6WTjNANJucKGx565k

Signed up! Thanks.