Category: Analysis

Richard Posner on AI Dangers

Richard Posner is a jurist, legal theorist, and economist. He is also the author of nearly 40 books, and is by far the most-cited legal scholar of the 20th century. In 2004, Posner published Catastrophe: Risk and Response, in which...

Mathematical Proofs Improve But Don’t Guarantee Security, Safety, and Friendliness

In 1979, Michael Rabin proved that his encryption system could be inverted — so as to decrypt the encrypted message — only if an attacker could factor n. And since this factoring task is computationally hard for any sufficiently large...
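The excerpt refers to the Rabin cryptosystem. As a rough illustration of the claim (this is not code from the post), here is a minimal toy sketch in Python: the public key is n = p * q, encryption squares the message modulo n, and decryption requires the secret primes. The primes and message below are made-up illustrative values; in practice p and q are hundreds of digits long, which is what makes factoring n infeasible, and Rabin's theorem says that any general method for inverting the encryption would also factor n.

```python
# Toy sketch of the Rabin cryptosystem (illustrative values only; real keys
# use primes hundreds of digits long, so that factoring n is infeasible).

p, q = 7, 11          # secret primes, both congruent to 3 mod 4
n = p * q             # public key

def encrypt(m, n):
    """Encryption is just squaring modulo n."""
    return (m * m) % n

def decrypt(c, p, q):
    """Decryption needs p and q: take square roots of c modulo each prime,
    then recombine them with the Chinese Remainder Theorem. This yields
    four candidate plaintexts; the sender's message is one of them."""
    mp = pow(c, (p + 1) // 4, p)   # sqrt of c mod p (valid because p % 4 == 3)
    mq = pow(c, (q + 1) // 4, q)   # sqrt of c mod q
    yp = pow(p, -1, q)             # p^-1 mod q  (Python 3.8+)
    yq = pow(q, -1, p)             # q^-1 mod p
    r = (yp * p * mq + yq * q * mp) % n
    s = (yp * p * mq - yq * q * mp) % n
    return {r, n - r, s, n - s}

m = 20
c = encrypt(m, n)
assert m in decrypt(c, p, q)       # anyone who can invert this in general can factor n
```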

How well will policy-makers handle AGI? (initial findings)

MIRI’s mission is “to ensure that the creation of smarter-than-human intelligence has a positive impact.” One policy-relevant question is: How well should we expect policy-makers to handle the invention of AGI, and what does this imply about how much...

How effectively can we plan for future decades? (initial findings)

MIRI aims to do research now that increases humanity’s odds of successfully managing important AI-related events that are at least a few decades away. Thus, we’d like to know: To what degree can we take actions now that will predictably...

Transparency in Safety-Critical Systems

In this post, I aim to summarize one common view on AI transparency and AI reliability. It’s difficult to identify the field’s “consensus” on AI transparency and reliability, so instead I will present a common view so that I can...

What is AGI?

One of the most common objections we hear when talking about artificial general intelligence (AGI) is that “AGI is ill-defined, so you can’t really say much about it.” In an earlier post, I pointed out that we often don’t have...