printouts
{"Answer":["The '''Stampy project''' is an open effort to build a comprehensive FAQ about [https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence artificial intelligence existential safety]—the field trying to make sure that when we build [https://en.wikipedia.org/wiki/Superintelligence superintelligent] [https://www.alignmentforum.org/tag/ai artificial systems] they are [https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/ aligned] with [https://www.lesswrong.com/tag/human-values human values] so that they do things compatible with our survival and flourishing.<br><br>We're also building a web UI ([https://stampy-ui.aprillion.workers.dev/ early prototype]) and [https://discord.gg/cEzKz8QCpa bot interface], so you'll soon be able to browse the FAQ and other [[sources]] in a cleaner way than the wiki.\n\nThe goals of the project are to:\n\n* Offer a one-stop shop for high-quality [[answers]] to common questions about AI alignment.\n** Let people answer questions in a way which scales, freeing up researcher time while allowing more people to learn from a reliable source.\n** Make [[external resources]] easier to find by connecting links to them with a search engine which gets smarter the more it's used.\n* Provide a form of [https://en.wikipedia.org/wiki/Legitimate_peripheral_participation legitimate peripheral participation] for the AI Safety community, as an onboarding path with a flexible level of commitment.\n** Encourage people to think, read, and talk about AI alignment while answering questions, creating a community of co-learners who can give each other feedback and social reinforcement.\n** Provide a way for budding researchers to prove their understanding of the topic and ability to produce good work.\n* Collect data about the kinds of questions people actually ask and how they respond, so we can better focus resources on answering them.\n** Track reactions on messages so we can learn which
answers need work.\n** Identify [[missing external content]] to create.\n\nIf you would like to help out, join us on the [https://discord.gg/X3XaytCGhr Discord] and either jump right into editing or read [[get involved]] for answers to common questions."],"StampedBy":["plex"],"Tags":[{"fulltext":"Stampy","fullurl":"https://stampy.ai/wiki/Stampy","namespace":0,"exists":"1","displaytitle":""}]}