Machine Intelligence Research Institute Progress Report, March 2012


Past progress reports: February 2012, January 2012, December 2011.

Fun fact of the day: The Machine Intelligence Research Institute’s research fellows and research associates have more peer-reviewed publications forthcoming in 2012 than they had published in all past years combined.

Here’s what the Machine Intelligence Research Institute did in March 2012:

  • Research articles: Luke and Anna released an updated draft of Intelligence Explosion: Evidence and Import, and Luke and Louie released an updated draft of The Singularity and Machine Ethics. Luke submitted an article on Friendly AI, co-authored with Nick Bostrom, to Communications of the ACM. Machine Intelligence Research Institute research associate Joshua Fox released two forthcoming articles co-authored with past Visiting Fellow Roman Yampolskiy: Safety Engineering for Artificial General Intelligence and Artificial General Intelligence and the Human Mental Model.
  • Other articles: Luke published The AI Problem, with Solutions, How to Fix Science, Muehlhauser-Goertzel Dialogue Part 1, a list of journals that may publish articles on AI risk, and the first three posts in his series AI Risk and Opportunity: A Strategic Analysis. The Machine Intelligence Research Institute paid past Visiting Fellow Kaj Sotala to write most of a new instructional booklet for Less Wrong meetup group organizers, which should be published in the next month or two. Eliezer continued work on his new Bayes’ Theorem tutorial and other writing projects. Carl published Using degrees of freedom to change the past for fun and profit and Are pain and pleasure equally energy efficient?
  • Ongoing long-term projects: Amy continued to work on Singularity Summit 2012. Michael continued to work on the Machine Intelligence Research Institute’s new primary website, new Summit website, new annual report, and new newsletter design. Louie continued to improve our accounting processes and also handled several legal and tax issues. Luke uploaded several more volunteer-prepared translations of Facing the Singularity. Luke also continued to build out the Machine Intelligence Research Institute’s team of remote collaborators, who are hard at work converting the Machine Intelligence Research Institute’s research articles to a new template, hunting down past predictions about AI, writing literature summaries on heuristics and biases, and more.
  • Rationality Group: Per our strategic plan, we will launch this new “Rationality Group” organization soon, so that the Machine Intelligence Research Institute can focus its efforts on activities related to AI risk. In March, Rationality Group (led by Anna) contracted with Julia Galef and Michael Smith to work toward launching the organization. Eliezer continued to help Rationality Group develop and test its lessons. Rationality Group has begun offering prizes for suggested exercises that develop rationality skills, starting with the skills of “Be Specific” and “Check Consequentialism.” Rationality Group has also announced three Minicamps on Rationality and Awesomeness, for May 11-13, June 22-24, and July 21-28. Apply now.
  • Meetings with advisors, supporters, and potential researchers: As usual, various MIRI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics. This included a two-week visit by Nick Beckstead, who worked with us on AI risk reduction strategy.
  • And of course much more than is listed here!

Finally, we’d like to recognize our most active volunteers in March 2012: Matthew Fallshaw, Gerard McCusker, Frank Adamek, and David Althaus. Thanks everyone! (And, our apologies if we forgot to name you!)

Machine Intelligence Research Institute Progress Report, February 2012


Past progress reports: January 2012, December 2011.

Here’s what the Machine Intelligence Research Institute did in February 2012:

  • Winter fundraiser completed: Thanks to the generous contributions of our supporters, our latest winter fundraiser was a success, raising much more than our target of $100,000!
  • Research articles: Luke and Anna published the Singularity Summit 2011 Workshop Report and released a draft of their article Intelligence Explosion: Evidence and Import, forthcoming in Springer’s The Singularity Hypothesis. Luke also worked on an article forthcoming in Communications of the ACM.
  • Other articles: Luke published a continuously updated list of Forthcoming and desired articles on AI risk. For Less Wrong, Carl published Feed the Spinoff Heuristic, and Luke published My Algorithm for Beating Procrastination, A brief tutorial on preferences in AI, and Get Curious. Carl also published 4 articles on ethical careers for the 80,000 Hours blog (later posts will discuss optimal philanthropy and existential risks): How hard is it to become the Prime Minister of the United Kingdom?; Entrepreneurship: a game of poker, not roulette; Software engineering: Britain vs. Silicon Valley; and 5 ways to be misled by salary rankings.
  • Ongoing long-term projects: Amy continued to work on Singularity Summit 2012. Michael continued to work on the Machine Intelligence Research Institute’s new website, and uploaded all past Singularity Summit videos to YouTube. Louie continued to improve our accounting processes and also handled several legal and tax issues. Luke uploaded several volunteer-prepared translations of Facing the Singularity, and also a podcast for this online mini-book.
  • Grant awarded: The Machine Intelligence Research Institute awarded philosopher Rachael Briggs a $20,000 grant to write a paper on Eliezer Yudkowsky’s timeless decision theory. Two of Rachael’s papers — Distorted Reflection and Decision-Theoretic Paradoxes as Voting Paradoxes — have previously been selected as among the 10 best philosophy papers of the year by The Philosopher’s Annual.
  • Rationality Group: Anna and Eliezer continued to lead the development of a new rationality education organization, temporarily called “Rationality Group.” Per our strategic plan, we will launch this new organization soon, so that the Machine Intelligence Research Institute can focus its efforts on activities related to AI risk. In February our Rationality Group team worked on curriculum development with several potential long-term hires, developed several rationality lessons which they tested weekly on small groups and iterated in response to feedback, spoke to advisors about how to build the organization and raise funds, and much more. The team also produced one example rationality lesson on sunk costs, including a presentation and exercise booklets. Note that Rationality Group is currently hiring curriculum developers, a remote executive assistant, and others, so apply here if you’re interested!
  • Meetings with advisors, supporters, and potential researchers: As usual, various MIRI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics. We also met with several potential researchers to gauge their interest and abilities. Carl spent two weeks in Oxford visiting the Future of Humanity Institute and working with the researchers there.
  • Outsourcing: On Louie’s (sound) advice, the Machine Intelligence Research Institute is undergoing a labor transition such that most of the work we do (measured in hours) will eventually be performed not by our core staff but by mostly-remote hourly contractors and volunteers, such as remote researchers, LaTeX workers, editors, and assistants. This shift provides numerous benefits, including (1) involving the broader community more directly in our work, (2) providing jobs for aspiring rationalists, and (3) freeing up our core staff to do the things that, due to accumulated rare expertise, only they can do.
  • And of course much more than is listed here!

Finally, we’d like to recognize our most active volunteers in February 2012: Brian Rabkin, Cameron Taylor, Mitchell Owen, Gerard McCusker, Alex Richard, Andrew Homan, Vincent Vu, Gabriel Sztorc, Paul Gentemann, John Maxwell, and David Althaus. Thanks everyone! (And, our apologies if we forgot to name you!)

2011-2012 Winter Fundraiser Completed


Thanks to our dedicated supporters, we met our goal for our 2011-2012 Winter Fundraiser. Thank you!

The fundraiser ran for 56 days, from December 27, 2011 to February 20, 2012.

We exceeded our $100K goal, raising a total of $143,048.84 from 101 individual donors.

Every donation that the Machine Intelligence Research Institute receives is powerful support for our mission — ensuring that the creation of smarter-than-human intelligence (superintelligence) benefits human society. We welcome donors contacting us to learn more about our pursuit of this mission and our continued expansion.

Keep your eye on this blog for regular progress reports from our executive director.

Machine Intelligence Research Institute Progress Report, January 2012


Past progress report: December 2011.

Here’s what the Machine Intelligence Research Institute did in January 2012:

  • Winter fundraiser: We continued raising funds in January, but we still have about $30,000 left to go in our winter fundraiser before the deadline of February 20th. Please support our recent efforts toward greater transparency, efficiency, and productivity by donating now!
  • Strategic discussions: In January we held a long and ongoing series of discussions concerning Machine Intelligence Research Institute strategy. Which scenarios are the most probable “desirable” futures for humanity, which ones can our species influence most significantly, and which ones should the Machine Intelligence Research Institute work to influence? Which tactical moves should the Machine Intelligence Research Institute make right now? How can our efforts best create synergies with other organizations focused on existential risks? These are complex questions, and in January, Machine Intelligence Research Institute staff members spent dozens of hours sharing their own evidence and arguments. (At one point, we also called upon the expertise of more than a dozen elite mathematicians in our circle.) These discussions continue today, and our opinions on strategy appear to be more unified than they were at the beginning of the month. But there is more evidence to gather and more strategic analysis to be done.
  • Ongoing long-term projects: Amy continued her preparations for Singularity Summit 2012. Michael Anissimov and others continued work on the Machine Intelligence Research Institute’s new website, which will feature loads of new content and a cleaner design. As part of our transparency efforts, Luke gave a second Q&A about the Machine Intelligence Research Institute, an interview at 80,000 Hours, and another interview at Singularity 1 on 1. Louie continued to work on improving our book-keeping and accounting practices. Anissimov finished thanking all donors who gave during 2011. (If you donated in 2011 and were not thanked, please contact michael@singularity.org!)
  • Articles: Luke and Anna continued writing “Intelligence Explosion: Evidence and Import,” and Carl continued working with Stuart Armstrong of FHI on “Arms Races and Intelligence Explosions.” Luke began adding non-English translations at Facing the Singularity, and published No God to Save Us and Value is Complex and Fragile there. Carl, with co-author Nick Bostrom, submitted a final version of “How Hard is Artificial Intelligence?” to the Journal of Consciousness Studies. For Less Wrong, Luke published What Curiosity Looks Like, Can the Chain Still Hold You?, Leveling Up in Rationality, and The Human’s Hidden Utility Function (Maybe); Anna published Urges vs. Goals. Eliezer continued work on his new Bayes’ Theorem tutorial. Luke and Anna wrote a report on the workshops that followed Singularity Summit 2011, which should be published soon.
  • Rationality Group: Anna continued to lead the development of a new rationality education organization, temporarily called “Rationality Group.” Per our strategic plan, we will launch this new organization soon, so that the Machine Intelligence Research Institute can focus its efforts on activities related to AI risk. In January we made one trial hire for the new organization, and reached out to dozens of other potential team members. We also published a draft of one rationality lesson as a sample (PowerPoint slides + booklet PDFs).
  • New team members: Kevin Fischer of GK International joined our board of directors. We also added several new research associates: Paul Christiano, Tyrrell McAllister, János Kramar, and Mihaly Barasz (at Google Switzerland). Luke hired an executive assistant, Denise Simard. Michael Vassar officially left his role as President to work for his new company, Personalized Medicine.
  • Meetings with advisors, supporters, and potential researchers: As usual, various MIRI staff met or spoke with dozens of advisors, supporters, and collaborators about how to build the existential risk community, how to mitigate AI risk, how to improve the Machine Intelligence Research Institute’s effectiveness, and other topics. We also met with several potential researchers to gauge their interest and abilities.
  • Relaunched the Visiting Fellows program: In January we relaunched our Visiting Fellows program. Instead of hosting many visiting fellows at once, we will now host only 1-2 fellows at a time, each for a limited duration tailored to that fellow. Our visiting fellow for the last week of January was Princeton philosophy undergraduate Jake Nebel. If you’re interested, please apply to our Visiting Fellows program here.
  • Much more: We launched a redesign of HPMoR.com, continued our work in the optimal philanthropy movement, continued work on our first annual report, and more.

Finally, we’d like to recognize our most active volunteers in January 2012: Mitchell Owen, Brian Rabkin, Huon Wilson, David Althaus, Florent Berthet, Sergio Terrero, “Lightwave,” Emile Kroeger, and Giles Edkins. Thanks everyone! (And, our apologies if we forgot to name you!)

Machine Intelligence Research Institute Progress Report, December 2011


“I think the Machine Intelligence Research Institute has some very smart people working on the most important mission on Earth, but… what exactly are they doing these days? I’m in the dark.”

There’s a good reason I hear this comment so often. We haven’t done a good job of communicating our progress to our supporters.

Since being appointed Executive Director of the Machine Intelligence Research Institute (MIRI) in November, I’ve been working to change that. I gave two Q&As about MIRI and explained our research program with a list of open problems in AI risk research. Now, I’d like to introduce our latest effort in transparency: monthly progress reports.

Q&A #2 with Luke Muehlhauser, Machine Intelligence Research Institute Executive Director


Machine Intelligence Research Institute Activities

Bugmaster asks:

…what does the SIAI actually do? You don’t submit your work to rigorous scrutiny by your peers in the field… you either aren’t doing any AGI research, or are keeping it so secret that no one knows about it… and you aren’t developing any practical applications of AI, either… So, what is it that you are actually working on, other than growing the SIAI itself?

It’s a good question, and my own biggest concern right now. Donors would like to know: Where is the visible return on investment? How can I see that I’m buying existential risk reduction when I donate to the Machine Intelligence Research Institute?

Interview with New MIRI Research Fellow Luke Muehlhauser


Section One: Background and Core Ideas

Q1. What is your personal background?
Q2. Why should we care about artificial intelligence?
Q3. Why do you think smarter-than-human artificial intelligence is possible?
Q4. The mission of the Machine Intelligence Research Institute is “to ensure that the creation of smarter-than-human intelligence benefits society.” How is your research contributing to that mission?
Q5. How does MIRI’s approach to making Friendly AI differ from Asimov’s Three Laws of Robotics?
Q6. Why is it necessary to make an AI that “wants the same things we want”?
Q7. If dangerous AI were to develop, why couldn’t we just “pull the plug”?
Q8. Why are you and the Machine Intelligence Research Institute focused on artificial intelligence instead of human intelligence enhancement or whole brain emulation?

