October 2024 newsletter
News and links
- Geoffrey Hinton and John Hopfield were awarded this year’s Nobel Prize in Physics for their foundational contributions to machine learning. In a press conference following the announcement, Hinton said that “if you look around, there are very few examples of more intelligent things being controlled by less intelligent things, which makes you wonder whether when AI gets smarter than us, it’s going to take over control.”
- In “A Narrow Path,” a group of AI policy researchers outlines a set of proposals for avoiding human extinction from AI. Their plan involves preventing smarter-than-human systems from being developed within the next twenty years. We are skeptical about instituting a moratorium for a fixed amount of time, rather than halting until humanity is truly ready to proceed, but we appreciate that they chose a duration measured in decades rather than months.
- In “Machines of Loving Grace,” Anthropic CEO Dario Amodei explores the potential benefits of smarter-than-human AI and also briefly argues that a coalition of democracies should race to quickly build it. Responding with his own essay, Max Tegmark argues that “from a game-theoretic point of view, this race is not an arms race but a suicide race[…] Because we are closer to building AGI than we are to figuring out how to align or control it.” We agree with Tegmark’s point here.
- California Governor Gavin Newsom vetoed SB 1047, the bill that would have mandated risk assessments for some AI developers. Unlike most of the bill’s critics, Newsom argued that the bill actually didn’t go far enough: “By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology.” In “SB 1047: Our Side Of The Story,” Scott Alexander strongly challenges this rationale and reflects on the bill’s rise and fall.
- OpenAI announced o1, a model trained with reinforcement learning to do more complex reasoning than previous systems. The model was shared with red-teaming organizations, but only one week before its release. One of those organizations, Apollo Research, found that o1 acts deceptively in some situations.
- There has recently been a string of new research and discussion on how language models’ ability to predict the future compares to that of humans:
  - Researchers at CAIS claimed that, given the right prompt, GPT-4o could forecast at a “superhuman level.”
  - Forecasting platform Metaculus ran its own study, which found that “no AI has demonstrated superhuman forecasting skill yet.”
  - Another group of researchers introduced a forecasting benchmark and used it to find that AI systems can forecast as well as random survey-takers, but not as well as expert forecasters.
You can subscribe to the MIRI Newsletter here.