Introducing the Intelligent Agent Foundations Forum


Today we are proud to publicly launch the Intelligent Agent Foundations Forum (RSS), a forum devoted to technical discussion of the research problems outlined in MIRI’s technical agenda overview, along with similar research problems.

Patrick’s welcome post explains:

Broadly speaking, the topics of this forum concern the difficulties of value alignment: the problem of how to ensure that machine intelligences of various levels adequately understand and pursue the goals that their developers actually intended, rather than getting stuck on some proxy for the real goal or failing in other unexpected (and possibly dangerous) ways. As these failure modes become more devastating the farther we advance in building machine intelligences, MIRI’s goal is to work today on the foundations of goal systems and architectures that would work even when the machine intelligence has general creative problem-solving ability beyond that of its developers, and has the ability to modify itself or build successors.

The forum has been privately active for several months, so many interesting articles have already been posted.

Also see How to contribute.



  • Mark Waser

    All the links give the following error

    502 Bad Gateway

    • Luke Muehlhauser

      The site was down for a bit; it’s back up now.

  • Kevin Lynch

    The one big question I have: we don’t even understand the basis of human sentience. If an AI achieves sentience in a way that is dissimilar to our own, how are we going to communicate with it? Will it be able to recognize us as fellow intelligent creatures? If it doesn’t, then the best thing we can hope for is that it ignores us altogether. Does anybody have a good idea about this one?