Question: What are good resources on AI alignment?
Answer: These are useful things to link to when editing Stampy!
- [https://www.youtube.com/c/RobertMilesAI/videos Rob's YouTube videos] ([https://www.youtube.com/watch?v=tlS5Y2vm02c&list=PLqL14ZxTTA4fRMts7Af2G8t4Rp17e8MdS&index=4 Computerphile appearances])
- [https://ai-safety-papers.quantifieduncertainty.org/ AI Safety Papers database] - A search interface for the [https://www.lesswrong.com/posts/4DegbDJJiMX2b3EKm/tai-safety-bibliographic-database TAI Safety Bibliography]
- [https://www.eacambridge.org/agi-safety-fundamentals AGI Safety Fundamentals Course]
- [https://www.alignmentforum.org/tags/ Alignment Forum] tags
- [https://rohinshah.com/alignment-newsletter/ The Alignment Newsletter] (and [https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit#gid=0 database sheet])
- Chapters of [https://publicism.info/philosophy/superintelligence/ Bostrom's Superintelligence online] and the [https://www.nickbostrom.com/views/superintelligence.pdf initial paper from which Superintelligence grew]
- [https://arbital.greaterwrong.com/explore/ai_alignment/ AI Alignment pages on Arbital]
- Much more on [https://www.aisafetysupport.org/resources/lots-of-links AI Safety Support] (feel free to integrate useful things from there into this list)
- [https://vkrakovna.wordpress.com/ai-safety-resources/ Vika's resources list]
- [https://docs.google.com/spreadsheets/d/1QSEWjXZuqmG6ORkig84V4sFCldIntyuQj7yq3gkDo0U/edit#gid=0 AI safety technical courses, reading lists, and curriculums]
- [https://aisafety.wordpress.com/ AI Safety Intro blog]
- [https://stampy.ai/wiki/Canonical_answers Stampy's canonical answers list] - This includes updated versions of various [[Imported FAQs|FAQs imported with permission]]:
  - [https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq Scott Alexander's Superintelligence FAQ]
  - [https://futureoflife.org/ai-faqs/ FLI's FAQ]
  - [https://intelligence.org/faq/ MIRI's FAQ]
  - [https://intelligence.org/ie-faq/ MIRI's Intelligence Explosion FAQ]
  - [https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/ Advice for AI Alignment Researchers]
  - [https://www.reddit.com/r/ControlProblem/wiki/faq r/ControlProblem's FAQ]
  - [https://markxu.com/ai-safety-faqs Mark Xu's FAQ]
  - [https://aisafety.wordpress.com/ AI safety blog] - Not yet imported.