Announcing the new AI Alignment Forum


This is a guest post by Oliver Habryka, lead developer for LessWrong. Our gratitude to the LessWrong team for the hard work they’ve put into developing this resource, and our congratulations on today’s launch!


I am happy to announce that after two months of open beta, the AI Alignment Forum is launching today. It is a new website built by the team behind LessWrong 2.0 to help create a new hub for technical AI alignment research and discussion.

One of our core goals when we designed the forum was to make it easier for new people to get started on doing technical AI alignment research. This effort was split into two major parts:

1. Better introductory content

We have been coordinating with AI alignment researchers to create three new sequences of posts that we hope can serve as introductions to some of the most important core ideas in AI alignment. The three new sequences will be:

  • Embedded Agency, by Abram Demski and Scott Garrabrant
  • Iterated Amplification, by Paul Christiano
  • Value Learning, by Rohin Shah

Over the next few weeks, we will be releasing about one post per day from these sequences, starting with the first post in the Embedded Agency sequence today.

If you are interested in learning about AI alignment, I encourage you to ask questions and discuss the content in the comment sections. And if you are already familiar with a lot of the core ideas, then we would greatly appreciate feedback on the sequences as we publish them. We hope that these sequences can be a major part of how new people get involved in AI alignment research, and so we care a lot about their quality and clarity.

2. Easier ways to join the discussion

Most scientific fields have to balance high-context discussion among specialists against public discussion, which allows for the broader dissemination of new ideas, the onboarding of new members, and opportunities for potential researchers to prove themselves. We tried to design a system that allows newcomers to participate and learn, while giving established researchers the space to have high-context discussions with their peers.

To do that, we integrated the new AI Alignment Forum closely with the existing LessWrong platform, as follows:

  • Any new post or comment on the new AI Alignment Forum is automatically cross-posted to LessWrong.com. Accounts are also shared between the two platforms.
  • Any comment or post on LessWrong can be promoted to the AI Alignment Forum by Alignment Forum members.
  • The reputation systems for LessWrong and the AI Alignment Forum are separate, and for every user, post and comment, you can see two reputation scores on LessWrong.com: a primary karma score combining karma from both sites, and a secondary karma score specific to AI Alignment Forum members.
  • Any member whose content is frequently promoted, and who garners a significant amount of karma from AI Alignment Forum members, will automatically be recommended to the AI Alignment Forum moderators as a candidate for Alignment Forum membership (a rough sketch of these mechanics follows this list).
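
To make the mechanics above concrete, here is a minimal sketch, in TypeScript, of how the two karma scores and the automatic membership recommendation might be computed. All of the type names, fields, and thresholds below are hypothetical illustrations, not the actual LessWrong 2.0 implementation.

```typescript
// Hypothetical sketch of the cross-posting and dual-karma model described
// above. None of these type or field names come from the actual
// LessWrong 2.0 codebase.

interface Vote {
  voterId: string;
  power: number;            // signed vote strength (negative for downvotes)
  voterIsAfMember: boolean; // whether the voter is an Alignment Forum member
}

interface Post {
  id: string;
  authorId: string;
  af: boolean; // true if posted on, or promoted to, the AI Alignment Forum
  votes: Vote[];
}

// Primary score: karma from voters on both sites combined.
function totalKarma(post: Post): number {
  return post.votes.reduce((sum, v) => sum + v.power, 0);
}

// Secondary score: karma from Alignment Forum members only.
function afKarma(post: Post): number {
  return post.votes
    .filter((v) => v.voterIsAfMember)
    .reduce((sum, v) => sum + v.power, 0);
}

// Hypothetical membership heuristic: a user whose content is promoted often
// and who accumulates enough AF-member karma gets flagged for moderator review.
function shouldRecommendForMembership(
  afKarmaTotal: number,
  promotedPosts: number
): boolean {
  const AF_KARMA_THRESHOLD = 100; // made-up threshold, for illustration only
  const PROMOTION_THRESHOLD = 3;  // likewise
  return afKarmaTotal >= AF_KARMA_THRESHOLD && promotedPosts >= PROMOTION_THRESHOLD;
}
```

The design point is simply that a single vote record carries enough information to produce both scores, so the two reputation systems can coexist on one shared database.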

We hope that this will result in a system in which cutting-edge research and discussion can happen, while new good ideas and participants can get noticed and rewarded for their contributions.

If you’ve been interested in doing alignment research, then I think the best way to get started right now is to comment on AI Alignment Forum posts on LessWrong, and to check out the new content we’ll be rolling out.


In an effort to centralize the existing discussion of technical AI alignment, the new forum is also going to replace the Intelligent Agent Foundations Forum (IAFF), which MIRI built and has maintained for the past two years. We are planning to shut down IAFF over the coming weeks, and we have collaborated with MIRI to import all of the content from that forum, as well as to ensure that all old URLs are properly forwarded to their corresponding addresses on the new site. If you contributed there, you should have received an email with the details of importing your content. (If you didn’t, send us a message via the Intercom chat in the bottom right corner of AlignmentForum.org.)
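
As an aside on what “properly forwarded” involves: permanent (HTTP 301) redirects from each old URL to its imported counterpart let browsers, bookmarks, and crawlers follow old links seamlessly. The sketch below shows one common way to do this with an Express middleware; the mapping entries and paths are invented for illustration and are not the actual IAFF redirect table.

```typescript
// Illustrative sketch of old-URL forwarding; the mapping entries here are
// made up, not the real IAFF-to-AlignmentForum.org redirect table.
import express from "express";

const app = express();

// Map from old forum paths to their imported locations on the new site.
const redirects: Record<string, string> = {
  "/item?id=123": "/posts/abc123/example-imported-post", // hypothetical entry
};

app.use((req, res, next) => {
  const target = redirects[req.originalUrl];
  if (target) {
    // 301 tells clients and crawlers the move is permanent.
    res.redirect(301, `https://www.alignmentforum.org${target}`);
  } else {
    next(); // not an old URL; serve the request normally
  }
});

app.listen(3000);
```

Permanent redirects of this kind also signal to search engines that the old pages have moved for good, so their rankings carry over to the new addresses.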

Thanks to MIRI for helping us build this project. I am looking forward to seeing many of you participate in discussions of the AI alignment problem on LessWrong and the new forum.