Category: Analysis

Three misconceptions in Edge.org’s conversation on “The Myth of AI”

A recent Edge.org conversation — “The Myth of AI” — is framed in part as a discussion of points raised in Bostrom’s Superintelligence, and as a response to much-repeated comments by Elon Musk and Stephen Hawking that seem to have...

The Financial Times story on MIRI

Richard Waters wrote a story on MIRI and others for Financial Times, which also put Nick Bostrom’s Superintelligence at the top of its summer science reading list. It’s a good piece. Go read it and then come back here so...

AGI outcomes and civilizational competence

The [latest IPCC] report says, “If you put into place all these technologies and international agreements, we could still stop warming at [just] 2 degrees.” My own assessment is that the kinds of actions you’d need to do that are...

Groundwork for AGI safety engineering

Improvements in AI are resulting in the automation of increasingly complex and creative human behaviors. Given enough time, we should expect artificial reasoners to begin to rival humans in arbitrary domains, culminating in artificial general intelligence (AGI). A machine would...

Exponential and non-exponential trends in information technology

Co-authored with Lila Rieber. In The Singularity is Near, Ray Kurzweil writes that “every aspect of information and information technology is growing at an exponential pace.” (Page 85. In the same book, he also writes that “we see ongoing exponential...

The world’s distribution of computation (initial findings)

What is the world’s current distribution of computation, and what will it be in the future? This question is relevant to several issues in AGI safety strategy. To name just three examples: If a large government or corporation wanted...