Nick Bostrom to speak about Superintelligence at UC Berkeley

News


MIRI has arranged for Nick Bostrom to discuss his new book — Superintelligence: Paths, Dangers, Strategies — on the UC Berkeley campus on September 12th.

Bostrom is the director of the Future of Humanity Institute at Oxford University, and is a frequent collaborator with MIRI researchers (see, e.g., “The Ethics of Artificial Intelligence”). He is the author of some 200 publications, and is best known for his work in five areas: (1) existential risk; (2) the simulation argument; (3) anthropics; (4) the impacts of future technology; and (5) the implications of consequentialism for global strategy. Earlier this year he was included on Prospect magazine’s World Thinkers list, as the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher.

Bostrom will be introduced by UC Berkeley professor Stuart Russell, co-author of the world’s leading AI textbook. Russell’s blurb for Superintelligence reads:

Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era.

The talk will begin at 7pm in room 310 (Banatao Auditorium) of Sutardja Dai Hall on the UC Berkeley campus.

If you live nearby, we hope to see you there! The room seats 150 people, on a first-come, first-served basis.

Copies of Superintelligence will also be available for purchase.


 

2014 Summer Matching Challenge!

News


Thanks to the generosity of several major donors, every donation made to MIRI between now and August 15th, 2014 will be matched dollar-for-dollar, up to a total of $200,000!

 

We have reached our matching total of $200,000! (Total donors: 116.)

Now is your chance to double your impact while helping us raise up to $400,000 (with matching) to fund our research program.

Corporate matching and monthly giving pledges will count towards the total! Please email malo@intelligence.org if you intend to leverage corporate matching (check here to see whether your employer will match your donation) or would like to pledge 6 months of monthly donations, so that we can properly account for your contributions towards the fundraiser.

(If you’re unfamiliar with our mission, see: Why MIRI?)

 

Accomplishments Since Our Winter 2013 Fundraiser Launched:

Ongoing Activities You Can Help Support

  • We’re writing an overview of the Friendly AI technical agenda (as we see it) so far.
  • We’re currently developing and testing several tutorials on different pieces of the Friendly AI technical agenda (tiling agents, modal agents, etc.).
  • We’re writing several more papers and reports.
  • We’re expanding the MIRIx program, largely to grow the pool of people we can plausibly hire as full-time FAI researchers in the next couple of years.
  • We’re planning, or helping to plan, multiple research workshops, including the May 2015 decision theory workshop at Cambridge University.
  • We’re finishing the editing for a book version of Eliezer’s Sequences.
  • We’re helping to fund further SPARC activity, which provides education and skill-building to elite young math talent, and introduces them to ideas like effective altruism and global catastrophic risks.
  • We’re continuing to discuss formal collaboration opportunities with UC Berkeley faculty and development staff.
  • We’re helping Nick Bostrom promote his Superintelligence book in the U.S.
  • We’re investigating opportunities for supporting Friendly AI research via federal funding sources such as the NSF.

Other projects are still being surveyed for likely cost and impact. See also our mid-2014 strategic plan.

We appreciate your support for our work! Donate now, and seize a better-than-usual opportunity to move our work forward. If you have questions about donating, please contact Malo Bourgon at malo@intelligence.org.

The $200,000 in total matching funds has been provided by Jaan Tallinn, Edwin Evans, and Rick Schwall.

May 2015 decision theory conference at Cambridge University

News

MIRI, CSER, and the philosophy department at Cambridge University are co-organizing a decision theory conference titled Self-Prediction in Decision Theory and AI, to be held in the Faculty of Philosophy at Cambridge University, May 13–19, 2015.

Huw Price and Arif Ahmed at Cambridge University are the lead organizers.

Confirmed speakers, in the order they are scheduled to speak, are:

(Updated May 17, 2015.)

MIRI’s July 2014 newsletter

Newsletters

Machine Intelligence Research Institute

Research Updates

News Updates

  • We’ve released our mid-2014 strategic plan update.
  • There are currently six active MIRIx groups around the world. If you’re a mathematician, computer scientist, or formal philosopher, you may want to attend one of these groups, or apply for funding to run your own independently-organized MIRIx workshop!
  • Luke and Eliezer will be giving talks at the Effective Altruism Summit.
  • We are actively hiring for four positions: research fellow, science writer, office manager, and director of development. Salaries and benefits are competitive, and visa assistance is available if needed.

Other Updates

As always, please don’t hesitate to let us know if you have any questions or comments.

Best,
Luke Muehlhauser
Executive Director


New report: “Non-omniscience, probabilistic inference, and metamathematics”

Papers

UC Berkeley student and MIRI research associate Paul Christiano has released a new report: “Non-omniscience, probabilistic inference, and metamathematics.”

Abstract:

We suggest a tractable algorithm for assigning probabilities to sentences of first-order logic and updating those probabilities on the basis of observations. The core technical difficulty is relaxing the constraints of logical consistency in a way that is appropriate for bounded reasoners, without sacrificing the ability to make useful logical inferences or update correctly on evidence.

Using this framework, we discuss formalizations of some issues in the epistemology of mathematics. We show how mathematical theories can be understood as latent structure constraining physical observations, and consequently how realistic observations can provide evidence about abstract mathematical facts. We also discuss the relevance of these ideas to general intelligence.

What is the relation between this new report and Christiano et al.’s earlier “Definability of truth in probabilistic logic” report, discussed by John Baez here? In this new report, Paul aims to take a broader look at the interaction between probabilistic reasoning and epistemological issues, from an algorithmic perspective, before continuing to think about reflection and truth in particular.
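
To give a concrete sense of the kind of object the report studies, here is a minimal toy sketch in Python. It is emphatically not the report’s algorithm, and the sentence names are hypothetical: it assigns probabilities to three sentences by enumerating every logically consistent truth assignment and conditioning on observations, which is exactly the intractable “omniscient” baseline that the report’s bounded reasoners are meant to approximate without full consistency enforcement.

```python
# Toy illustration (not the report's algorithm): exact Bayesian
# probabilities over sentences, via a uniform prior on the logically
# consistent truth assignments and conditioning on observations.
from itertools import product

sentences = ["A", "B", "A->B"]  # hypothetical sentence names

def consistent(world):
    # The one logical constraint among these toy sentences:
    # "A->B" must agree with material implication on A and B.
    return world["A->B"] == ((not world["A"]) or world["B"])

# Enumerate all truth assignments and keep the consistent ones.
worlds = [dict(zip(sentences, bits))
          for bits in product([False, True], repeat=len(sentences))]
worlds = [w for w in worlds if consistent(w)]
prior = [1.0 / len(worlds)] * len(worlds)

def prob(sentence, dist):
    # P(sentence) = total weight of the worlds in which it holds.
    return sum(p for w, p in zip(worlds, dist) if w[sentence])

def condition(sentence, dist):
    # Bayesian update: discard worlds where the observation fails.
    z = prob(sentence, dist)
    return [p / z if w[sentence] else 0.0 for w, p in zip(worlds, dist)]

print(prob("B", prior))              # 0.5 before any observations
posterior = condition("A", condition("A->B", prior))
print(prob("B", posterior))          # 1.0: modus ponens as conditioning
```

Here modus ponens falls out of ordinary conditioning, but only because we enumerated every consistent truth assignment, a set that grows exponentially with the number of sentences. Relaxing that consistency requirement for bounded reasoners, without losing updates like the one above, is the core technical difficulty the abstract describes.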

Roger Schell on long-term computer security research

Conversations

Roger R. Schell is a Professor of Engineering Practice at the University of Southern California Viterbi School of Engineering, and a member of the founding faculty for its Masters of Cyber Security degree program. He is internationally recognized for originating several key security design and evaluation techniques, and he holds patents in cryptography, authentication, and trusted workstation technology. For more than a decade he has been co-founder and an executive of Aesec Corporation, a start-up company providing verifiably secure platforms. Previously, Prof. Schell was the Corporate Security Architect for Novell, and co-founder and vice president of Gemini Computers, Inc., where he directed development of their highly secure (what the NSA called “Class A1”) commercial product, the Gemini Multiprocessing Secure Operating System (GEMSOS). He was also the founding Deputy Director of NSA’s National Computer Security Center. He has been referred to as the “father” of the Trusted Computer System Evaluation Criteria (the “Orange Book”). Prof. Schell is a retired USAF Colonel. He received a Ph.D. in computer science from MIT, an M.S.E.E. from Washington State University, and a B.S.E.E. from Montana State University. NIST and the NSA have recognized him with the National Computer System Security Award. In 2012 he was inducted into the inaugural class of the National Cyber Security Hall of Fame.


New chapter in Cambridge Handbook of Artificial Intelligence

Papers

The Cambridge Handbook of Artificial Intelligence has been released. It contains a chapter co-authored by Nick Bostrom (Oxford) and Eliezer Yudkowsky (MIRI) called “The Ethics of Artificial Intelligence,” available in PDF here.

The abstract reads:

The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. The first section discusses issues that may arise in the near future of AI. The second section outlines challenges for ensuring that AI operates safely as it approaches humans in its intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider how AIs might differ from humans in certain basic respects relevant to our ethical assessment of them. The final section addresses the issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill.

Our mid-2014 strategic plan

MIRI Strategy

Summary

Events since MIRI’s April 2013 strategic plan have increased my confidence that we are “headed in the right direction.” During the rest of 2014 we will continue to:

  • Decrease our public outreach efforts, leaving most of that work to FHI at Oxford, CSER at Cambridge, FLI at MIT, Stuart Russell at UC Berkeley, and others (e.g. James Barrat).
  • Finish a few pending “strategic research” projects, then decrease our efforts on that front, again leaving most of that work to FHI, plus CSER and FLI if they hire researchers, plus some others.
  • Increase our investment in our Friendly AI (FAI) technical research agenda.

The reasons for continuing along this path remain largely the same, but I have more confidence in it now than I did before. This is because, since April 2013:

  • We made substantial Friendly AI research progress on many different fronts, and we do not remotely feel that we’ve exhausted the progress that could be made with more researchers, which demonstrates that the FAI technical agenda is highly tractable.
  • FHI, CSER, and FLI have had substantial public outreach success, in part by leveraging their university affiliations and impressive advisory boards.
  • We’ve heard that as a result of this outreach success, and also because of Stuart Russell’s discussions with researchers at AI conferences, AI researchers are beginning to ask, “Okay, this looks important, but what is the technical research agenda? What could my students and I do about it?” Basically, they want to see an FAI technical agenda, and MIRI is already developing one (see below).

In short, I think we tested and validated MIRI’s new strategic focus, and now it is time to scale. Thus, our top goals for the next 6-12 months are to:

  1. Produce more Friendly AI research.
  2. Recruit more Friendly AI researchers.
  3. Fundraise heavily to support those activities.
