MIRI Updates

Response to Cegłowski on superintelligence

Web developer Maciej Cegłowski recently gave a talk on AI safety (video, text) arguing that we should be skeptical of the standard assumptions that go into working on this problem, and doubly skeptical of the extreme-sounding claims, attitudes, and policies...

January 2017 Newsletter

Eliezer Yudkowsky’s new introductory talk on AI safety is out, in text and video forms: “The AI Alignment Problem: Why It’s Hard, and Where to Start.” Other big news includes the release of version 1 of Ethically Aligned Design, an...

New paper: “Optimal polynomial-time estimators”

MIRI Research Associate Vanessa Kosoy has developed a new framework for reasoning under logical uncertainty, “Optimal polynomial-time estimators: A Bayesian notion of approximation algorithm.” Abstract: The concept of an “approximation algorithm” is usually only applied to optimization problems, since in...

AI Alignment: Why It’s Hard, and Where to Start

Back in May, I gave a talk at Stanford University for the Symbolic Systems Distinguished Speaker series, titled “The AI Alignment Problem: Why It’s Hard, and Where to Start.” The video for this talk is now available on YouTube: ...

December 2016 Newsletter

We’re in the final weeks of our push to cover our funding shortfall, and we’re now halfway to our $160,000 goal. For potential donors who are interested in an outside perspective, Future of Humanity Institute (FHI) researcher Owen Cotton-Barratt has...

November 2016 Newsletter

Post-fundraiser update: Donors rallied late last month to get us most of the way to our first fundraiser goal, but we ultimately fell short. This means that we’ll need to make up the remaining $160k gap over the next month...
