Category: Analysis

AI Alignment: Why It’s Hard, and Where to Start

Back in May, I gave a talk at Stanford University for the Symbolic Systems Distinguished Speaker series, titled “The AI Alignment Problem: Why It’s Hard, And Where To Start.” The video for this talk is now available on YouTube: ...

Safety engineering, target selection, and alignment theory

Artificial intelligence capabilities research is aimed at making computer systems more intelligent — able to solve a wider range of problems more effectively and efficiently. We can distinguish this from research specifically aimed at making AI systems at various capability...

The need to scale MIRI’s methods

Andrew Critch, one of the new additions to MIRI’s research team, has taken the opportunity of MIRI’s winter fundraiser to write on his personal blog about why he considers MIRI’s work important. Some excerpts: Since a team of CFAR alumni...

AI and Effective Altruism

MIRI is a research nonprofit specializing in a poorly-explored set of problems in theoretical computer science. GiveDirectly is a cash transfer service that gives money to poor households in East Africa. What kind of conference would bring together representatives from...

Powerful planners, not sentient software

Over the past few months, some major media outlets have been spreading concern about the idea that AI might spontaneously acquire sentience and turn against us. Many people have pointed out the flaws with this notion, including Andrew Ng, an...

What Sets MIRI Apart?

Last week, we received several questions from the effective altruist community in response to our fundraising post. Here’s Maxwell Fritz: […] My snap reaction to MIRI’s pitches has typically been, “yeah, AI is a real concern. But I have no...