AI risk-related improvements to the LW wiki

https://www.lesswrong.com/posts/uEj6m3Qv84jfQios7/ai-risk-related-improvements-to-the-lw-wiki

Back in May, Luke suggested the creation of a scholarly AI risk wiki, which was to include a large set of summary articles on topics related to AI risk, mapped out in terms of how they related to the central debates about AI risk. In response, Wei Dai suggested that among other things, the existing Less Wrong wiki could be improved instead. As a result, the Singularity Institute has massively improved the LW wiki, in preparation for a more ambitious scholarly AI risk wiki. The outcome was the creation or dramatic expansion of the following articles:

In managing the project, I focused on content over presentation, so a number of articles still have minor issues; in particular, the grammar and style have room for improvement. It's our hope that, with the largest part of the work already done, the LW community will help improve the articles even further.

Thanks to everyone who worked on these pages: Alex Altair, Adam Bales, Caleb Bell, Costanza Riccioli, Daniel Trenor, João Lourenço, Joshua Fox, Patrick Rhodes, Pedro Chaves, Stuart Armstrong, and Steven Kaas.

Comment

https://www.lesswrong.com/posts/uEj6m3Qv84jfQios7/ai-risk-related-improvements-to-the-lw-wiki?commentId=QRzAaeAzcbHAxe9SJ

I’ve watched a lot of these edits through the RSS feed as part of my daily spam-fighting; good work everyone!


https://www.lesswrong.com/posts/uEj6m3Qv84jfQios7/ai-risk-related-improvements-to-the-lw-wiki?commentId=Y6Wn8YpvXodvGv75E

Give this man some upvotes for his daily spam-fighting, as well as for his assistance when auto-bans targeted at spammers accidentally hit us. :)

https://www.lesswrong.com/posts/uEj6m3Qv84jfQios7/ai-risk-related-improvements-to-the-lw-wiki?commentId=twSbsJAkrevzamFr9

Great work! That is a lot of updated pages.


https://www.lesswrong.com/posts/uEj6m3Qv84jfQios7/ai-risk-related-improvements-to-the-lw-wiki?commentId=FvXYihctAh9bvvwgD

Thanks. :)

https://www.lesswrong.com/posts/uEj6m3Qv84jfQios7/ai-risk-related-improvements-to-the-lw-wiki?commentId=KLpPstphDxW2Knmez

This is awesome. Thanks for doing all that work.


https://www.lesswrong.com/posts/uEj6m3Qv84jfQios7/ai-risk-related-improvements-to-the-lw-wiki?commentId=FFqypeH3v7mJjtJ6q

Thanks. :)

https://www.lesswrong.com/posts/uEj6m3Qv84jfQios7/ai-risk-related-improvements-to-the-lw-wiki?commentId=ssifLFaiMinMyDj4o

LW wiki articles I wish LWers would write/expand:

  • Iterated embryo selection (update: AlexMennen wrote it)

  • Doomsday argument (update: AlexMennen wrote it)

  • Simpleton gambit (update: AlexMennen wrote it)

  • Delusion box (update: AlexMennen wrote it)

  • Causality

  • Robot’s Rebellion

  • Dysrationalia

  • Epistemic prisoner’s dilemma (update: D_Malik wrote it)

  • Counterfactual resiliency (update: AlexMennen wrote it)

  • Personal identity (update: AlexMennen wrote it)

  • Adversarial collaboration (update: AlexMennen wrote it)

  • Imagination inflation

https://www.lesswrong.com/posts/uEj6m3Qv84jfQios7/ai-risk-related-improvements-to-the-lw-wiki?commentId=EajRqsnSTyFAsg4SB

It’s our hope that, with the largest part of the work already done, the LW community will help improve the articles even further.

Has someone watchlisted these pages to make sure no one accidentally makes them less accurate in the process of improving their presentation?

https://www.lesswrong.com/posts/uEj6m3Qv84jfQios7/ai-risk-related-improvements-to-the-lw-wiki?commentId=nSq9DbKDuWkuLGAFG

I am pretty excited about the AI risk wiki.

https://www.lesswrong.com/posts/uEj6m3Qv84jfQios7/ai-risk-related-improvements-to-the-lw-wiki?commentId=jy7JWfuP5MHjT9SHR

A key element in making use of this Wiki will be setting up a system that blocks spammers from registering accounts. Perhaps there should be a CAPTCHA with an answer that only a genuine Less Wronger would know? Anyone who knows how to set this up would be of tremendous help.
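For a MediaWiki installation, the usual way to do this is the ConfirmEdit extension's QuestyCaptcha module, which is configured in PHP and asks a custom question at registration time. As a language-agnostic illustration of the idea, here is a minimal Python sketch of such a question-based gate; the questions and answers below are invented placeholders, not anything the commenter proposed:

```python
import hashlib
import secrets

# Hypothetical question set. In practice the answers should be things only a
# regular Less Wrong reader would know; these examples are assumptions.
QUESTIONS = {
    "Complete the site motto: 'refining the art of human ...'": "rationality",
    "What animal appears in the parable about rationalists and lions?": "lion",
}

def _normalize(answer: str) -> str:
    """Lowercase and strip whitespace so trivial formatting differences pass."""
    return answer.strip().lower()

def pick_question() -> str:
    """Choose a random question to show on the registration form."""
    return secrets.choice(list(QUESTIONS))

def check_answer(question: str, submitted: str) -> bool:
    """Return True when the submitted answer matches, ignoring case/whitespace.

    Compare fixed-length digests with a constant-time comparison rather than
    raw strings, to avoid leaking information through timing.
    """
    expected = hashlib.sha256(_normalize(QUESTIONS[question]).encode()).digest()
    given = hashlib.sha256(_normalize(submitted).encode()).digest()
    return secrets.compare_digest(expected, given)
```

The weakness of this scheme, as with QuestyCaptcha itself, is that a human spammer (or a bot whose operator reads the site once) can learn the answers, so the question pool needs occasional rotation.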