Eliezer Yudkowsky’s new introductory talk on AI safety is out, in both text and video form: “The AI Alignment Problem: Why It’s Hard, and Where to Start.” Other big news includes the release of version 1 of Ethically Aligned Design, an...
MIRI Research Associate Vanessa Kosoy has developed a new framework for reasoning under logical uncertainty, “Optimal polynomial-time estimators: A Bayesian notion of approximation algorithm.” Abstract: The concept of an “approximation algorithm” is usually only applied to optimization problems, since in...
We’re in the final weeks of our push to cover our funding shortfall, and we’re now halfway to our $160,000 goal. For potential donors who are interested in an outside perspective, Future of Humanity Institute (FHI) researcher Owen Cotton-Barratt has...
Post-fundraiser update: Donors rallied late last month to get us most of the way to our first fundraiser goal, but we ultimately fell short. This means that we’ll need to make up the remaining $160k gap over the next month...
In May, the White House Office of Science and Technology Policy (OSTP) announced “a new series of workshops and an interagency working group to learn more about the benefits and risks of artificial intelligence.” They hosted a June Workshop on...
Nate, Malo, Jessica, Tsvi, and I will be answering questions tomorrow on the Effective Altruism Forum. If you’ve been curious about anything related to our research, plans, or general thoughts, you’re invited to submit your own questions in the comments...