MIRI’s April newsletter: Relaunch Celebration and a New Math Result



Greetings from the Executive Director

Dear friends,

These are exciting times at MIRI.

After years of awareness-raising and capacity-building, we have finally transformed ourselves into a research institute focused on producing the mathematical research required to build trustworthy (or “human-friendly”) machine intelligence. As our most devoted supporters know, this has been our goal for roughly a decade, and it is a thrill to have made the transition.

It is also exciting to see how much more quickly one can gain academic traction with mathematics research than with philosophical research and technological forecasting. Within hours of our publishing a draft of our first math result, Fields Medalist Timothy Gowers had seen the draft and commented on it (here), along with several other professional mathematicians.

We celebrated our “relaunch” at an April 11th party in San Francisco. It was a joy to see old friends and make some new ones. You can see photos and read some details below.

For more detail on our new strategic priorities, see our blog post: MIRI’s Strategy for 2013.

Cheers,

Luke Muehlhauser
Executive Director

MIRI Relaunch Celebration in San Francisco

On April 11th, at HUB San Francisco, MIRI celebrated its name change and its “relaunch” as a mathematics research institute. The party was also a celebration of our ongoing 2nd research workshop, featuring MIRI research fellow Eliezer Yudkowsky and 11 visiting researchers from North America and Europe. About 50 people attended the party.

Our party included a short presentation by visiting researcher Qiaochu Yuan (UC Berkeley). Qiaochu (pronounced, as he likes to explain, “chow like food and chew also like food”) explained one of the open problems on MIRI’s research agenda: the Löbian obstacle to self-modifying systems. He explained why we’d want an AI to be able to trust its successor AIs, why Löb’s Theorem is an obstacle to that, and how the new probabilistic logic from our 1st research workshop might lead to a solution.
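For readers who haven't encountered it before, the obstacle Qiaochu described traces back to Löb's Theorem, a standard result in mathematical logic (stated here from memory rather than from the workshop materials). Writing □P for "P is provable in the system," it says that for any sufficiently strong system such as Peano Arithmetic (PA):

    \text{If } \mathrm{PA} \vdash (\Box P \rightarrow P), \text{ then } \mathrm{PA} \vdash P.

In other words, the system can endorse "if I prove P, then P" only for statements it already proves outright, so it cannot affirm the general soundness of its own proofs. That is the sense in which Löb's Theorem obstructs an AI that wants to trust a successor built on the same (or a stronger) proof system.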

In addition to the usual food and drinks, our party was supplied with poster boards on easels so that the researchers in attendance could explain pieces of their work to anyone who was interested — or, people could just doodle. 🙂

Additional photos from the event will be published soon — stay tuned via our blog or our Facebook page.

MIRI’s First Math Result

November 11-18, 2012, we held (what we now call) the 1st MIRI Workshop on Logic, Probability, and Reflection. This workshop included 4 participants, and resulted in the discovery of a kind of “loophole” in Tarski’s undefinability theorem (1936) which may lead to a solution for the Löbian obstacle to trustworthy self-modification. We published an early version of the paper explaining this result on March 22nd, and the latest draft lives here: Definability of “Truth” in Probabilistic Logic. The paper’s lead author is visiting researcher Paul Christiano (UC Berkeley).
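Very roughly, and paraphrasing the paper rather than quoting it: Tarski's theorem says a consistent language cannot contain its own exact truth predicate, but the paper argues that a language can contain its own probability symbol ℙ and consistently satisfy a reflection schema along the lines of

    \text{if } a < \mathbb{P}(\varphi) < b \text{, then } \mathbb{P}\big(a < \mathbb{P}(\ulcorner \varphi \urcorner) < b\big) = 1, \quad \text{for all sentences } \varphi \text{ and rationals } a < b.

The system only ever knows its own probabilities to within arbitrarily small intervals, and that slack is what lets it escape the diagonalization argument; see the paper for the precise statement and construction.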

Eliezer’s post Reflection in Probabilistic Set Theory explains the meaning of the result, and also comments on how the result was developed:

Paul Christiano showed up with the idea (of consistent probabilistic reflection via a fixed-point theorem) to a week-long [MIRI research workshop] with Marcello Herreshoff, Mihaly Barasz, and myself; then we all spent the next week proving that version after version of Paul’s idea couldn’t work or wouldn’t yield self-modifying AI; until finally… it produced something that looked like it might work. If we hadn’t been trying to solve this problem… [then] this would be just another batch of impossibility results in the math literature. I remark on this because it may help demonstrate that Friendly AI is a productive approach to math qua math, which may aid some mathematician in becoming interested.

The participants of our ongoing 2nd MIRI Workshop on Logic, Probability, and Reflection are continuing to develop this result to examine its chances for resolving the Löbian obstacle to trustworthy self-modification — or, as workshop participant Daniel Dewey (Oxford) called it, the “Löbstacle.”

Proofreaders Needed

Several MIRI research articles are being held up from publication due to a lack of volunteer proofreaders, including Eliezer Yudkowsky’s “Intelligence Explosion Microeconomics.”

Want to be a proofreader for MIRI? Here are some reasons to get involved:

  • Get a sneak peek at our publications before they become publicly available.
  • Earn points at MIRIvolunteers.org, our online volunteer system that runs on Youtopia. (Even if you’re not interested in the points, tracking your time through Youtopia helps us manage and quantify the volunteer proofreading effort.)
  • Help MIRI produce polished, well-written publications, which are of high value to us.
  • Help speed up our publication process. Proofreading is currently our biggest bottleneck.

For more details on how you can sign up as a MIRI proofreader, see here.

Facing the Intelligence Explosion Published

Facing the Intelligence Explosion is now available as an ebook! You can get it here.

It is available as a “pay-what-you-want” package that includes the ebook in three formats: MOBI, EPUB, and PDF.

It is also available on Amazon Kindle (US, Canada, UK, and most others) and the Apple iBookstore (US, Canada, UK, and most others).

All sources are DRM-free. Grab a copy, share it with your friends, and review it on Amazon or the iBookstore.

All proceeds go directly to funding the technical and strategic research of the Machine Intelligence Research Institute.

Efficient Charity Article

In 2011, Holden Karnofsky of GiveWell wrote a series of posts on the topic of “efficient charity”: how to get the most bang for your philanthropic buck. Karnofsky argued for a particular method of estimating the expected value of charitable donations, which he called “Bayesian Adjustment.” Some readers interpreted this method as providing an a priori judgment that existential risk reduction charities (such as MIRI) could not be efficient uses of philanthropic dollars. (Karnofsky denies that interpretation.)

Karnofsky’s argument is subtle and complicated, but important. Since MIRI is also interested in the subject of efficient charity, we worked with Steven Kaas to produce a reply to Karnofsky’s posts, titled Bayesian Adjustment Does Not Defeat Existential Risk Charity. We do not think this resolves our points of disagreement with Karnofsky, but it does move the dialogue one step forward. Karnofsky has since replied to our article in two comments (one, two), and we expect the dialogue will continue for some time.

Appreciation of Ioven Fables

Due to changes in MIRI’s operational needs resulting from our transition to more technical research, MIRI no longer requires a full-time executive assistant, and thus our current executive assistant Ioven Fables (LinkedIn) will be stepping down this month. Ioven continues to support our mission, and he may perform occasional contracting work for us in the future.

It was a pleasure for me to work with Ioven over the past 11 months. He played a major role in transforming MIRI into a more robust and efficient organization, and his consistent cheer and professionalism will be missed. I recommend his services to anyone looking to hire someone to help with operations and development work at their organization or company.

Ioven: Thanks so much for your service to MIRI! I enjoyed working with you, and I wish you the best of luck.

Luke Muehlhauser