January 2017 Newsletter

Filed under Newsletters.

Eliezer Yudkowsky’s new introductory talk on AI safety is out, in text and video form: “The AI Alignment Problem: Why It’s Hard, and Where to Start.” Other big news includes the release of version 1 of Ethically Aligned Design, an IEEE recommendations document with a section on artificial general intelligence that we helped draft. Research updates…

New paper: “Optimal polynomial-time estimators”

Filed under Papers.

MIRI Research Associate Vadim Kosoy has developed a new framework for reasoning under logical uncertainty, “Optimal polynomial-time estimators: A Bayesian notion of approximation algorithm.” Abstract: The concept of an “approximation algorithm” is usually only applied to optimization problems, since in optimization problems the performance of the algorithm on any given input is a continuous parameter….
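For context on the standard notion the abstract contrasts with (this is an illustrative sketch, not from the paper and not MIRI’s new framework), a classic approximation algorithm for an optimization problem is the greedy 2-approximation for minimum vertex cover, where the algorithm’s performance on an input is a continuous quantity, the ratio of its cover size to the optimum:

```python
# Classic 2-approximation for minimum vertex cover (illustrative only;
# unrelated to the optimal polynomial-time estimator framework itself).
def vertex_cover_2approx(edges):
    """Greedily take both endpoints of any uncovered edge.

    The picked edges form a matching, and any optimal cover must contain
    at least one endpoint of each matched edge, so the returned cover is
    at most twice the size of an optimal cover.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Path graph 1-2-3-4: an optimal cover is {2, 3}, size 2.
edges = [(1, 2), (2, 3), (3, 4)]
print(sorted(vertex_cover_2approx(edges)))
```

On this input the greedy cover has size 4, within the guaranteed factor of 2 of the optimum. For a decision problem, by contrast, each answer is simply right or wrong, which is why the usual approximation-algorithm notion does not directly apply there.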

December 2016 Newsletter

Filed under Newsletters.

We’re in the final weeks of our push to cover our funding shortfall, and we’re now halfway to our $160,000 goal. For potential donors interested in an outside perspective, Future of Humanity Institute (FHI) researcher Owen Cotton-Barratt has written up why he’s donating to MIRI this year. (Donation page.) Research updates: New at IAFF:…

November 2016 Newsletter

Filed under Newsletters.

Post-fundraiser update: Donors rallied late last month to get us most of the way to our first fundraiser goal, but we ultimately fell short. This means that we’ll need to make up the remaining $160k gap over the next month if we’re going to move forward on our 2017 plans. We’re in a good position…

White House submissions and report on AI safety

Filed under News.

In May, the White House Office of Science and Technology Policy (OSTP) announced “a new series of workshops and an interagency working group to learn more about the benefits and risks of artificial intelligence.” They hosted a June Workshop on Safety and Control for AI (videos), along with three other workshops, and issued a general…

MIRI AMA, and a talk on logical induction

Filed under News, Video.

Nate, Malo, Jessica, Tsvi, and I will be answering questions tomorrow at the Effective Altruism Forum. If you’ve been curious about anything related to our research, plans, or general thoughts, you’re invited to submit your own questions in the comments below or at Ask MIRI Anything. We’ve also posted a more detailed version of our…

October 2016 Newsletter

Filed under Newsletters.

Our big announcement this month is our paper “Logical Induction,” introducing an algorithm that learns to assign reasonable probabilities to mathematical, empirical, and self-referential claims in a way that outpaces deduction. MIRI’s 2016 fundraiser is also live, and runs through the end of October. Research updates: Shtetl-Optimized and n-Category Café discuss the “Logical Induction”…

September 2016 Newsletter

Filed under Newsletters.

Research updates:
New at IAFF: Modeling the Capabilities of Advanced AI Systems as Episodic Reinforcement Learning; Simplified Explanation of Stratification
New at AI Impacts: Friendly AI as a Global Public Good
We ran two research workshops this month: a veterans’ workshop on decision theory for long-time collaborators and staff, and a machine learning workshop focusing on generalizable environmental goals, impact…