MIRI’s November 2013 Workshop in Oxford

Filed under News.

From November 23-29, MIRI will host another Workshop on Logic, Probability, and Reflection, for the first time in Oxford, UK. Participants will investigate problems related to reflective agents, probabilistic logic, and priors over logical statements / the logical omniscience problem. Participants confirmed so far include: Stuart Armstrong (Oxford), Mihaly Barasz (Google), Catrin Campbell-Moore (LMU Munich), Daniel Dewey (Oxford), Benja…

Transparency in Safety-Critical Systems

Filed under Analysis.

In this post, I aim to summarize one common view on AI transparency and AI reliability. It’s difficult to identify the field’s “consensus” on AI transparency and reliability, so instead I will present a common view so that I can use it to introduce a number of complications and open questions that (I think) warrant…

Holden Karnofsky on Transparent Research Analyses

Filed under Conversations.

Holden Karnofsky is the co-founder of GiveWell, which finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give. GiveWell tracked ~$9.6 million in donations made on the basis of its recommendations in 2012. It has historically sought proven, cost-effective, scalable giving opportunities, but its new initiative,…

2013 Summer Matching Challenge Completed!

Filed under News.

Thanks to the generosity of dozens of donors, on August 15th we successfully completed the largest fundraiser in MIRI’s history. All told, we raised $400,000, which will fund our research going forward. This fundraiser came “right down to the wire.” At 8:45pm Pacific time, with only a few hours left before the deadline, we announced on…

What is AGI?

Filed under Analysis.

One of the most common objections we hear when talking about artificial general intelligence (AGI) is that “AGI is ill-defined, so you can’t really say much about it.” In an earlier post, I pointed out that we often don’t have precise definitions for things while doing useful work on them, as was the case with…

Benja Fallenstein on the Löbian Obstacle to Self-Modifying Systems

Filed under Conversations.

Benja Fallenstein researches mathematical models of human and animal behavior at Bristol University, as part of the MAD research group and the decision-making research group. Before that, she graduated from the University of Vienna with a BSc in Mathematics. In her spare time, Benja studies questions relevant to AI impacts and Friendly AI, including: AI forecasting,…

“Algorithmic Progress in Six Domains” Released

Filed under Papers.

Today we released a new technical report by visiting researcher Katja Grace called “Algorithmic Progress in Six Domains.” The report summarizes data on algorithmic progress – that is, better performance per fixed amount of computing hardware – in six domains: SAT solvers, chess and Go programs, physics simulations, factoring, mixed integer programming, and some forms of…

AI Risk and the Security Mindset

Filed under Analysis.

In 2008, security expert Bruce Schneier wrote about the security mindset: Security requires a particular mindset. Security professionals… see the world differently. They can’t walk into a store without noticing how they might shoplift. They can’t use a computer without wondering about the security vulnerabilities. They can’t vote without trying to figure out how to…