New Paper: “Predicting AGI: What can we say when we know so little?”


MIRI research associate Benja Fallenstein and UC Berkeley student Alex Mennen have released a new working paper titled “Predicting AGI: What can we say when we know so little?”

From the introduction:

This analysis does not attempt to predict when AGI will actually be achieved, but instead, to predict when this epistemic state with respect to AGI will change, such that we will have a clear idea of how much further progress is needed before we reach AGI. Metaphorically speaking, instead of predicting when AI takes off, we predict when it will start taxiing to the runway.

The paper argues for a Pareto distribution for “time to taxi,” and concludes:

in general, a Pareto distribution suggests that we should put a much greater emphasis on short-term strategies than a less skewed distribution (e.g. a normal distribution) with the same median would.
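
To get a rough quantitative feel for why the skew matters, here is a small illustrative calculation (the 20-year median, one-year minimum, and 10-year normal spread are placeholder numbers of my own, not the paper’s): a Pareto distribution and a normal distribution sharing the same median assign very different probabilities to “taxiing” within the next five years.

```python
import math
from scipy import stats

# Placeholder numbers, not the paper's: compare a Pareto and a normal
# distribution that share a 20-year median for "time to taxi".
median = 20.0
x_m = 1.0                                     # Pareto scale: at least one year until taxiing
alpha = math.log(2) / math.log(median / x_m)  # shape chosen so the Pareto median is 20 years

pareto = stats.pareto(b=alpha, scale=x_m)
normal = stats.norm(loc=median, scale=10.0)   # same median, an assumed 10-year spread

print("P(taxi within 5 years), Pareto:", round(pareto.cdf(5.0), 2))  # ~0.31
print("P(taxi within 5 years), normal:", round(normal.cdf(5.0), 2))  # ~0.07
```

Under these toy numbers the heavy-tailed Pareto puts several times more probability on the near term than the normal does, which is what drives the recommendation to weight short-term strategies more heavily.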

New Paper: “Racing to the Precipice”


During his time as a MIRI research fellow, Carl Shulman contributed to a paper now available as an FHI technical report: Racing to the Precipice: a Model of Artificial Intelligence Development.

Abstract:

This paper presents a simple model of an AI arms race, where several development teams race to build the first AI. Under the assumption that the first AI will be very powerful and transformative, each team is incentivized to finish first — by skimping on safety precautions if need be. This paper presents the Nash equilibrium of this process, where each team takes the correct amount of safety precautions in the arms race. Having extra development teams and extra enmity between teams can increase the danger of an AI-disaster, especially if risk taking is more important than skill in developing the AI. Surprisingly, information also increases the risks: the more teams know about each others’ capabilities (and about their own), the more the danger increases.
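
For intuition about how this kind of race model behaves, here is a deliberately simplified two-team sketch of my own devising (it is not the paper’s actual model): each team picks a safety level, safety work slows a team down, and a winning team’s AI avoids disaster only with probability equal to its safety level. A crude grid search for mutual best responses then approximates a Nash equilibrium, and lowering `rival_win_value` (i.e., increasing enmity between the teams) tends to pull the best-response safety levels down, echoing the paper’s qualitative finding.

```python
import numpy as np

# A toy two-team race (my own illustrative model, not the paper's equations).
# Each team picks a safety level s in [0, 1]; safety work slows development,
# and a winning team's AI is only safe with probability s. Disaster pays zero.

grid = np.linspace(0.0, 1.0, 101)    # candidate safety levels
skill = np.array([1.0, 1.0])         # assumed equal team capabilities
rival_win_value = 0.5                # payoff from the rival winning safely; lower = more enmity

def payoff(i, s):
    """Expected payoff to team i when the teams choose safety levels s = (s0, s1)."""
    s = np.asarray(s, dtype=float)
    speed = skill * (1.0 - 0.9 * s)                 # more safety work, slower progress
    p_win = speed / speed.sum()                     # chance of finishing first
    win_value = np.array([1.0, rival_win_value]) if i == 0 else np.array([rival_win_value, 1.0])
    return float((p_win * s * win_value).sum())

def best_response(i, s_other):
    """Safety level for team i that maximizes its payoff, holding the rival fixed."""
    return max(grid, key=lambda x: payoff(i, [x, s_other] if i == 0 else [s_other, x]))

# Crude Nash search: iterate best responses until the choices stop changing.
s = [0.5, 0.5]
for _ in range(100):
    new = [best_response(0, s[1]), best_response(1, s[0])]
    if new == s:
        break
    s = new
print("approximate equilibrium safety levels:", s)
```

The grid search is just the simplest thing that works in a sketch; the paper itself characterizes the Nash equilibrium directly and examines how the number of teams, enmity, and information about capabilities change the danger.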

Update: As of July 2015, this paper has been published in the journal AI & Society.

MIRI’s November 2013 Newsletter


 

Machine Intelligence Research Institute

Dear friends,
We’re experimenting with a new, ultra-brief newsletter style. To let us know what you think of it, simply reply to this email. Thanks!

News Updates

  • You can now support MIRI for free by shopping at smile.amazon.com instead of amazon.com. Update your bookmarks!
  • Louie Helm will be onsite for the Nov. 24th Marin County screening of Doug Wolens’ new documentary The Singularity. Details on this and other screenings are here.
Research Updates
Other Updates
  • Many of our friends have said Louie Helm’s Rockstar Research is among their favorite new sources of news; check it out!
  • Know an exceptionally bright, ambitious person younger than 20? Tell them to apply for The Thiel Fellowship! $100,000 to skip college and develop one’s skills and ideas, with an incredible network of mentors in the Bay Area.
  • CFAR has upcoming rationality workshops in February (Melbourne), March (Bay Area), and April (NYC). Tell your friends!
Cheers,
Luke Muehlhauser
Executive Director


Support MIRI by Shopping at AmazonSmile


If you shop at the new AmazonSmile, Amazon donates 0.5% of the price of your eligible purchases to a charitable organization of your choosing.

MIRI is an eligible charitable organization, so the next time you consider purchasing something through Amazon, support MIRI by shopping at AmazonSmile!

If you get to Amazon.com via an affiliate link, remember to change “amazon.com” to “smile.amazon.com” in the address bar before making your purchase. Or, even easier, use the SmileAlways Chrome extension.

Greg Morrisett on Secure and Reliable Systems


Greg Morrisett is the Allen B. Cutting Professor of Computer Science at Harvard University. He received his B.S. in Mathematics and Computer Science from the University of Richmond in 1989, and his Ph.D. from Carnegie Mellon in 1995. In 1996, he took a position at Cornell University, and in the 2003-04 academic year, he took a sabbatical and visited the Microsoft European Research Laboratory. In 2004, he moved to Harvard, where he has served as Associate Dean for Computer Science and Engineering, and where he currently heads the Harvard Center for Research on Computation and Society.

Morrisett has received a number of awards for his research on programming languages, type systems, and software security, including a Presidential Early Career Award for Scientists and Engineers, an IBM Faculty Fellowship, an NSF Career Award, and an Alfred P. Sloan Fellowship.

He served as Chief Editor for the Journal of Functional Programming and as an associate editor for ACM Transactions on Programming Languages and Systems and Information Processing Letters. He currently serves on the editorial board for The Journal of the ACM and as co-editor-in-chief for the Research Highlights column of Communications of the ACM. In addition, Morrisett has served on the DARPA Information Science and Technology Study (ISAT) Group, the NSF Computer and Information Science and Engineering (CISE) Advisory Council, Microsoft Research’s Technical Advisory Board, and Microsoft’s Trustworthy Computing Academic Advisory Board.

Luke Muehlhauser: One of the interesting projects in which you’re involved is SAFE, a DARPA-funded project “focused on a clean slate design for resilient and secure systems.” What is the motivation for this project, and in particular for its “clean slate” approach?

Read more »

From Philosophy to Math to Engineering


For centuries, philosophers wondered how we could learn what causes what. Some argued it was impossible, or possible only via experiment. Others kept hacking away at the problem, clarifying ideas like “counterfactual” and “probability” and “correlation” by making them more precise and coherent.

Then, in the 1990s, a breakthrough: Judea Pearl and others showed that, in principle, we can sometimes infer causal relations from data even without experiment, via the mathematical machinery of probabilistic graphical models.

Next, engineers used this mathematical insight to write software that can, in seconds, infer causal relations from a data set of observations.
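
For readers who want a concrete sense of what such software is doing under the hood, here is a minimal, self-contained sketch (my own illustration, not code from Pearl or from this post): data are generated from the chain X → Y → Z, and simple conditional-independence checks, here partial correlations, recover the tell-tale pattern that Y screens off X from Z. Constraint-based causal-discovery algorithms over probabilistic graphical models systematize exactly this kind of test.

```python
import numpy as np

# Purely observational data from the chain X -> Y -> Z, followed by the
# conditional-independence checks that causal-discovery software automates.
rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)   # Y caused by X
z = 0.8 * y + rng.normal(size=n)   # Z caused by Y

def partial_corr(a, b, given):
    """Correlation of a and b after linearly regressing `given` out of both."""
    resid = lambda v: v - np.polyval(np.polyfit(given, v, 1), given)
    return float(np.corrcoef(resid(a), resid(b))[0, 1])

print("corr(X, Z)     =", round(float(np.corrcoef(x, z)[0, 1]), 3))  # clearly nonzero
print("corr(X, Z | Y) =", round(partial_corr(x, z, y), 3))           # ~0: Y screens off X from Z
```

Tests like these cannot, on their own, tell this chain apart from the reversed chain Z → Y → X, which is part of why the graphical-models machinery, and sometimes experiments, are still needed.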

Across the centuries, researchers had toiled away, pushing our understanding of causality from philosophy to math to engineering.


And so it is with Friendly AI research. Current progress on each sub-problem of Friendly AI lies somewhere on a spectrum from philosophy to math to engineering.

Read more »

Robin Hanson on Serious Futurism


Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He is known as a founder of the field of prediction markets; he was a chief architect of the Foresight Exchange, DARPA’s FutureMAP, IARPA’s DAGGRE, and SciCast, and is chief scientist at Consensus Point. He started the first internal corporate prediction market at Xanadu in 1990, and invented the widely used market scoring rules. He also studies signaling and information aggregation, and is writing a book on the social implications of brain emulations. He blogs at Overcoming Bias.

Hanson received a B.S. in physics from the University of California, Irvine in 1981, an M.S. in physics and an M.A. in Conceptual Foundations of Science from the University of Chicago in 1984, and a Ph.D. in social science from Caltech in 1997. Before getting his Ph.D., he researched artificial intelligence, Bayesian statistics, and hypertext publishing at Lockheed, NASA, and elsewhere.

Luke Muehlhauser: In an earlier blog post, I wrote about the need for what I called AGI impact experts who “develop skills related to predicting technological development, predicting AGI’s likely impact on society, and identifying which interventions are most likely to increase humanity’s chances of safely navigating the creation of AGI.”

In 2009, you gave a talk called “How does society identify experts and when does it work?” Given the study you’ve done and the expertise you’ve developed, what do you think of humanity’s prospects for developing these AGI impact experts? If they are developed, do you think society will be able to recognize who is an expert and who is not?

Read more »

New Paper: “Embryo Selection for Cognitive Enhancement”


During his time as a MIRI research fellow, Carl Shulman co-authored (with Nick Bostrom) a paper that is now available as a preprint, titled “Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?”

Abstract:

Human capital is a key determinant of personal and national outcomes, and a major input to scientific progress. It has been suggested that advances in genomics will make it possible to enhance human intellectual abilities. One way to do this would be via embryo selection in the context of in vitro fertilization (IVF). In this article, we analyze the feasibility, timescale, and possible societal impacts of embryo selection for cognitive enhancement. We find that embryo selection, on its own, may have significant impacts, but likely not drastic ones, over the next 50 years, though large effects could accumulate over multiple generations. However, there is a complementary technology, stem cell-derived gametes, which has been making rapid progress and which could amplify the impact of embryo selection, enabling very large changes if successfully applied to humans.
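
To get a feel for the kind of number this feasibility analysis turns on, here is a back-of-the-envelope Monte Carlo with placeholder assumptions of my own (a predictor capturing 10% of the trait variation among siblings, and selection among 2, 10, or 100 embryos; these are not the paper’s figures): how much does implanting the top-scoring embryo actually buy, in trait standard deviations, when the genetic predictor is imperfect?

```python
import numpy as np

# Toy Monte Carlo, not the paper's tables: expected gain from implanting the
# embryo with the highest predicted score out of n, when the predictor
# captures only a fraction of the trait variation among siblings.
rng = np.random.default_rng(0)

def expected_gain(n_embryos, var_explained=0.1, trials=50_000):
    true = rng.normal(size=(trials, n_embryos))   # true trait deviations among sibling embryos
    noise = rng.normal(size=(trials, n_embryos))
    predicted = np.sqrt(var_explained) * true + np.sqrt(1 - var_explained) * noise
    chosen = true[np.arange(trials), predicted.argmax(axis=1)]
    return chosen.mean()                          # average true advantage of the selected embryo

for n in (2, 10, 100):
    print(f"best of {n:3d} embryos: ~{expected_gain(n):+.2f} SD")
```

With these toy assumptions a single round of selection yields only a fraction of a standard deviation, which gives some intuition for the abstract’s “significant impacts, but likely not drastic ones” and for why repeating the process across generations, or amplifying it with stem cell-derived gametes, is where very large changes would come from.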

The last sentence refers to “iterated embryo selection” (IES), a future technology first described by MIRI in 2009. This technology has significant strategic relevance for Friendly AI (FAI) development because it might be the only intelligence amplification (IA) technology (besides whole brain emulation) to have large enough effects on human intelligence to substantially shift our odds of getting FAI before arbitrary AGI, if AGI is developed sometime this century.

Unfortunately, it remains unclear whether the arrival of IES would shift our FAI chances positively or negatively. On the one hand, a substantially smarter humanity may be wiser, and more likely to get FAI right. On the other hand, IES might accelerate AGI relative to FAI, since AGI is more parallelizable than FAI. (For more detail, and more arguments pointing in both directions, see Intelligence Amplification and Friendly AI.)