White House submissions and report on AI safety
In May, the White House Office of Science and Technology Policy (OSTP) announced “a new series of workshops and an interagency working group to learn more about the benefits and risks of artificial intelligence.” It hosted a June Workshop on Safety and Control for AI (videos), along with three other workshops, and issued a general request for information on AI (see MIRI’s primary submission here).
The OSTP has now released a report summarizing its conclusions, “Preparing for the Future of Artificial Intelligence,” and the result is very promising. The OSTP acknowledges the ongoing discussion about AI risk, and recommends “investing in research on longer-term capabilities and how their challenges might be managed”:
General AI (sometimes called Artificial General Intelligence, or AGI) refers to a notional future AI system that exhibits apparently intelligent behavior at least as advanced as a person across the full range of cognitive tasks. A broad chasm seems to separate today’s Narrow AI from the much more difficult challenge of General AI. Attempts to reach General AI by expanding Narrow AI solutions have made little headway over many decades of research. The current consensus of the private-sector expert community, with which the NSTC Committee on Technology concurs, is that General AI will not be achieved for at least decades.14
People have long speculated on the implications of computers becoming more intelligent than humans. Some predict that a sufficiently intelligent AI could be tasked with developing even better, more intelligent systems, and that these in turn could be used to create systems with yet greater intelligence, and so on, leading in principle to an “intelligence explosion” or “singularity” in which machines quickly race far ahead of humans in intelligence.15
In a dystopian vision of this process, these super-intelligent machines would exceed the ability of humanity to understand or control. If computers could exert control over many critical systems, the result could be havoc, with humans no longer in control of their destiny at best and extinct at worst. This scenario has long been the subject of science fiction stories, and recent pronouncements from some influential industry leaders have highlighted these fears.
A more positive view of the future held by many researchers sees instead the development of intelligent systems that work well as helpers, assistants, trainers, and teammates of humans, and are designed to operate safely and ethically.
The NSTC Committee on Technology’s assessment is that long-term concerns about super-intelligent General AI should have little impact on current policy. The policies the Federal Government should adopt in the near-to-medium term if these fears are justified are almost exactly the same policies the Federal Government should adopt if they are not justified. The best way to build capacity for addressing the longer-term speculative risks is to attack the less extreme risks already seen today, such as current security, privacy, and safety risks, while investing in research on longer-term capabilities and how their challenges might be managed. Additionally, as research and applications in the field continue to mature, practitioners of AI in government and business should approach advances with appropriate consideration of the long-term societal and ethical questions – in addition to just the technical questions – that such advances portend. Although prudence dictates some attention to the possibility that harmful superintelligence might someday become possible, these concerns should not be the main driver of public policy for AI.
Later, the report discusses “methods for monitoring and forecasting AI developments”:
One potentially useful line of research is to survey expert judgments over time. As one example, a survey of AI researchers found that 80 percent of respondents believed that human-level General AI will eventually be achieved, and half believed it is at least 50 percent likely to be achieved by the year 2040. Most respondents also believed that General AI will eventually surpass humans in general intelligence.50 While these particular predictions are highly uncertain, as discussed above, such surveys of expert judgment are useful, especially when they are repeated frequently enough to measure changes in judgment over time. One way to elicit frequent judgments is to run “forecasting tournaments” such as prediction markets, in which participants have financial incentives to make accurate predictions.51 Other research has found that technology developments can often be accurately predicted by analyzing trends in publication and patent data.52 […]
When asked during the outreach workshops and meetings how government could recognize milestones of progress in the field, especially those that indicate the arrival of General AI may be approaching, researchers tended to give three distinct but related types of answers:
1. Success at broader, less structured tasks: In this view, the transition from present Narrow AI to an eventual General AI will occur by gradually broadening the capabilities of Narrow AI systems so that a single system can cover a wider range of less structured tasks. An example milestone in this area would be a housecleaning robot that is as capable as a person at the full range of routine housecleaning tasks.
2. Unification of different “styles” of AI methods: In this view, AI currently relies on a set of separate methods or approaches, each useful for different types of applications. The path to General AI would involve a progressive unification of these methods. A milestone would involve finding a single method that is able to address a larger domain of applications that previously required multiple methods.
3. Solving specific technical challenges, such as transfer learning: In this view, the path to General AI does not lie in progressive broadening of scope, nor in unification of existing methods, but in progress on specific technical grand challenges, opening up new ways forward. The most commonly cited challenge is transfer learning, which has the goal of creating a machine learning algorithm whose result can be broadly applied (or transferred) to a range of new applications.
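As a toy illustration of the trend-analysis approach the report mentions (analyzing publication and patent data), the sketch below fits an exponential trend to yearly publication counts and extrapolates it. The counts and the simple log-linear fit are illustrative stand-ins, not anything drawn from the studies the report cites.

```python
# Illustrative only: fit an exponential growth trend to hypothetical yearly
# publication counts and extrapolate it, in the spirit of the trend-analysis
# work the report cites (the cited studies are far more sophisticated).
import numpy as np

years = np.array([2010, 2011, 2012, 2013, 2014, 2015])   # hypothetical years
papers = np.array([1200, 1350, 1600, 2000, 2600, 3400])  # hypothetical counts

# Regressing log(count) on year gives an exponential fit.
slope, intercept = np.polyfit(years, np.log(papers), 1)

def projected_count(year):
    """Extrapolate the fitted exponential trend to a future year."""
    return float(np.exp(intercept + slope * year))

print(f"Implied annual growth rate: {np.exp(slope) - 1:.1%}")
print(f"Projected count for 2020: {projected_count(2020):,.0f}")
```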
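The transfer learning milestone is similarly easy to illustrate in miniature: whatever a system learns on one task should carry over to another. The sketch below trains a small network on scikit-learn’s digits 0–4 as a “source” task and reuses its learned hidden-layer features to classify digits 5–9; the dataset, network size, and source/target split are arbitrary choices for illustration, not anything proposed in the report.

```python
# Minimal transfer-learning sketch: train a small network on a "source" task,
# then reuse its hidden layer as a feature extractor for a different "target"
# task. Purely illustrative; transfer learning research works at far larger scale.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
source, target = y < 5, y >= 5                      # digits 0-4 vs. digits 5-9

# Learn representations on the source task only.
net = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                    max_iter=500, random_state=0)
net.fit(X[source], y[source])

def transferred_features(X):
    """Apply the source network's hidden layer (ReLU) as a feature extractor."""
    return np.maximum(0.0, X @ net.coefs_[0] + net.intercepts_[0])

# A simple linear classifier on the transferred features learns the target task.
Xtr, Xte, ytr, yte = train_test_split(X[target], y[target], random_state=0)
clf = LogisticRegression(max_iter=1000).fit(transferred_features(Xtr), ytr)
print("Target-task accuracy on transferred features:",
      clf.score(transferred_features(Xte), yte))
```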
The report also discusses the open problems outlined in “Concrete Problems in AI Safety” and cites the MIRI paper “The Errors, Insights and Lessons of Famous AI Predictions – and What They Mean for the Future.”
In related news, Barack Obama recently answered some questions about AI risk and Nick Bostrom’s Superintelligence in a Wired interview. After saying that “we’re still a reasonably long way away” from general AI (video) and that his directive to his national security team is to worry more about near-term security concerns (video), Obama adds:
Now, I think, as a precaution — and all of us have spoken to folks like Elon Musk who are concerned about the superintelligent machine — there’s some prudence in thinking about benchmarks that would indicate some general intelligence developing on the horizon. And if we can see that coming, over the course of three decades, five decades, whatever the latest estimates are — if ever, because there are also arguments that this thing’s a lot more complicated than people make it out to be — then future generations, or our kids, or our grandkids, are going to be able to see it coming and figure it out.