MIRI’s October Newsletter


Greetings from the Executive Director

Dear friends,

The big news this month is that Paul Christiano and Eliezer Yudkowsky are giving talks at Harvard and MIT about the work coming out of MIRI’s workshops, on Oct. 15th and 17th, respectively (details below).

Meanwhile we’ve been planning future workshops and preparing future publications. Our experienced document production team is also helping to prepare Nick Bostrom’s Superintelligence book for publication. It’s a very good book, and should be released by Oxford University Press in mid-2014.

By popular demand, MIRI research fellow Eliezer Yudkowsky now has a few “Yudkowskyisms” available on t-shirts, at Rational Attire. Thanks to Katie Hartman and Michael Keenan for setting this up.

Cheers,

Luke Muehlhauser
Executive Director

Upcoming Talks at Harvard and MIT

If you live near Boston, you’ll want to come see Eliezer Yudkowsky give a talk about MIRI’s research program in the spectacular Stata building on the MIT campus, on October 17th.

His talk is titled Recursion in rational agents: Foundations for self-modifying AI. There will also be a party the next day in MIT’s Building 6, with Yudkowsky in attendance.

Two days earlier, Paul Christiano will give a technical talk to a smaller audience about one of the key results from MIRI’s research workshops thus far. This talk is titled Probabilistic metamathematics and the definability of truth.

For more details on both talks, see the blog post here.

“Our Final Invention” Released

Our Final Invention, the best book yet written about the challenges of getting good outcomes from smarter-than-human AI, has been released.

MIRI’s Luke Muehlhauser reviewed the book for the Kurzweil AI blog here, and GCRI’s Seth Baum reviewed the book for Scientific American’s Guest Blog here. You can also read an excerpt from the book on Tor.com.

If you read the book, be sure to write a quick review of it on Amazon.com.

New Analyses and Conversations

Much of MIRI’s research is published directly to our blog. Since our last newsletter, we’ve published the following conversations:

Paul Rosenbloom on Cognitive Architectures. For decades, Rosenbloom was a project manager for Soar, perhaps the earliest AGI project. In this interview, he discussed his new cognitive architecture project, Sigma.

Effective Altruism and Flow-Through Effects. Carl Shulman, who was at the time a MIRI research fellow, participated in a conversation about effective altruism and flow-through effects. This issue is highly relevant to MIRI’s mission, since MIRI focuses on activities that are intended to produce altruistic value via their flow-through effects on the invention of AGI. The other participants were FHI’s Nick Beckstead, UC Berkeley’s Paul Christiano, GiveWell’s Holden Karnofsky, and CEA’s Rob Wiblin. 

We’ve also published two new research analyses:

How well will policy-makers handle AGI? One question relevant to superintelligence strategy is: “How well should we expect policy makers to handle the invention of AGI, and what does this imply about how much effort to put into AGI risk mitigation vs. other concerns?” To investigate this question, we asked Jonah Sinick to examine how well policy-makers handled past events analogous in some ways to the future invention of AGI, and summarize his findings.

Mathematical Proofs Improve But Don’t Guarantee Security, Safety, and Friendliness. The approaches sometimes called “provable security,” “provable safety,” and “provable friendliness” should not be misunderstood as offering 100% guarantees of security, safety, and friendliness. Rather, these approaches are meant to provide more confidence than we could otherwise have, all else equal, that a given system is secure, safe, or “friendly.” Especially for something as complex as Friendly AI, our message is: “If we prove it correct, it might work. If we don’t prove it correct, it definitely won’t work.”

Double Your Donations via Corporate Matching

MIRI has now partnered with Double the Donation, a company that makes it easier for donors to take advantage of donation matching programs offered by their employers.

More than 65% of Fortune 500 companies match employee donations, and 40% offer grants for volunteering, but many of these opportunities go unnoticed because employees don’t know the programs exist.

Go to MIRI’s Double the Donation page to find out whether your employer can match your donations to MIRI.

Rockstar Research Magazine

Want an intelligent, independent source of news that isn’t just the same “hot,” “trending” stories everyone else is covering?

Many of MIRI’s fans have told us they like Rockstar Research Magazine, MIRI Deputy Director Louie Helm’s daily news brief on AI, life hacking, effective altruism, rationality, and independent research.

Visit Rockstar Research to browse past stories and sign up to receive weekly updates.

Featured Volunteer: Mallory Tackett

Mallory Tackett helps MIRI with publicity tasks. She is a physics undergraduate researching neural networks.

Mallory learned about MIRI through Randal Koene when she was studying consciousness and whole brain emulation. She agrees with MIRI’s message that caution must be taken in developing AGI.

Eventually, Mallory would like to participate in MIRI’s research workshops. She would also like to spread awareness about the possibilities of whole brain emulation and virtual reality.