New eBook: ‘Smarter Than Us’

News

We are pleased to release a new ebook, commissioned by MIRI and written by Oxford University’s Stuart Armstrong. It is available in EPUB, MOBI, and PDF formats, and from the Amazon and Apple ebook stores.

What happens when machines become smarter than humans? Forget lumbering Terminators. The power of an artificial intelligence (AI) comes from its intelligence, not physical strength and laser guns. Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present? Stuart Armstrong’s new book navigates these questions with clarity and wit.

Head over to intelligence.org/smarter-than-us/ to grab a copy.

André Platzer on Verifying Cyber-Physical Systems

Conversations

André Platzer is an Assistant Professor of Computer Science at Carnegie Mellon University. He develops the logical foundations of cyber-physical systems to characterize their fundamental principles and to answer the question of how we can trust a computer to control physical processes. He received an M.Sc. from the University of Karlsruhe (TH), Germany, in 2004 and a Ph.D. in Computer Science from the University of Oldenburg, Germany, in 2008, after which he joined CMU as an Assistant Professor.

Dr. Platzer received a number of awards for his research on the logic of dynamical systems, cyber-physical systems, programming languages, and theorem proving, including an NSF CAREER Award and an ACM Doctoral Dissertation Honorable Mention Award. He was also named one of the Brilliant 10 Young Scientists by Popular Science magazine and one of AI’s 10 to Watch by IEEE Intelligent Systems magazine.

Highlights of Platzer’s thoughts, from the interview below:

  • KeYmaera is a formal verification tool for cyber-physical systems. It has seen many real-world applications, e.g. verifying non-collision in the European Train Control System.
  • KeYmaera’s capability comes from Platzer’s “differential dynamic logic,” which builds on earlier work in (e.g.) dynamic logic; a small example formula appears after this list.
  • Contra Levitt, Platzer isn’t convinced that autonomous robots will need to use decision theory, though they may.
  • Platzer contrasts formal verification (“did I build the system right?”) with formal validation (“did I build the right system?”). One way to make progress on the latter is to develop what Rushby called “deep assumption tracking.”
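
To give a flavor of what a differential dynamic logic property looks like, here is a small safety formula roughly in the style of Platzer’s train-braking examples (an illustration written for this summary, not a formula taken from the interview), in LaTeX notation:

\[ v^2 \le 2b\,(m - x) \;\wedge\; b > 0 \;\rightarrow\; \bigl[\,\{x' = v,\; v' = -b \;\&\; v \ge 0\}\,\bigr]\; x \le m \]

Here x is the train’s position, v its speed, b its braking deceleration, and m the end of its movement authority; the box modality asserts that x ≤ m holds after every run of the braking dynamics. The formula is valid because the quantity v^2 − 2b(m − x) stays constant while braking, and tools like KeYmaera are built to check statements of exactly this shape.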

Read more »

Gerwin Klein on Formal Methods

Conversations

Gerwin Klein is a Senior Principal Researcher at NICTA, Australia’s National Centre of Excellence for ICT Research, and Conjoint Associate Professor at the University of New South Wales in Sydney, Australia. He leads NICTA’s Formal Methods research discipline and was the leader of the L4.verified project that created the first machine-checked proof of functional correctness of a general-purpose microkernel in 2009. He joined NICTA in 2003 after receiving his PhD from Technische Universität München, Germany, where he formally proved type-safety of the Java Bytecode Verifier in the theorem prover Isabelle/HOL.

His research interests are in formal verification, programming languages, and low-level systems. Gerwin has won a number of awards together with his team, among them the 2011 MIT TR-10 award for the top ten emerging technologies worldwide, NICTA’s Richard E. Newton impact award for the kernel verification work, the best paper award from SOSP’09 for the same, and an award for the best PhD thesis in Germany in 2003 for his work on bytecode verification. When he is not proving theorems and working on trustworthy software, he enjoys travelling and dabbling in martial arts and photography. Together with Tobias Nipkow he has just published an online draft of the textbook Concrete Semantics, which teaches programming language semantics using the Isabelle theorem prover.

Highlights of Klein’s thoughts, from the interview below:

  • Verifying code not designed for verification is very difficult and costly. Such “post-mortem verification” has other disadvantages as well.
  • Program designers can use abstraction, modularity, and explicit architecture choices to help make complex systems transparent to humans.
  • There are two ways probability can play a role in verification: (1) direct probabilistic reasoning, in the logic or in a setting where the program itself is probabilistic, or (2) standard non-probabilistic reasoning paired with subjectively uncertain reasoning about what a guarantee means for the overall probability that the system will work as intended. (A toy illustration of (2) follows this list.)
  • Truly autonomous systems seem to be a long way off, but if you could make a system with truly unexpected behavior, then you couldn’t make it safe.
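
As a toy illustration of option (2) above (a sketch of the idea, not Klein’s wording): a functional-correctness proof typically shows that the system behaves as specified provided its assumptions A_1, …, A_n hold (the hardware model is accurate, the compiler is correct, the specification captures what was intended, and so on). If you assign each assumption a subjective probability of failing, the union bound gives

\[ P(\text{system behaves as specified}) \;\ge\; 1 - \sum_{i=1}^{n} P(A_i \text{ fails}), \]

so the non-probabilistic proof reduces the overall failure probability to the (hopefully small) probability that one of its assumptions is wrong.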

Luke Muehlhauser: In your forthcoming paper “Comprehensive Formal Verification of an OS Microkernel,” you and your co-authors describe the kernel design (seL4) you used to ensure its verification would be tractable. You also discuss the process of keeping the formal proof current “as the requirements, design and implementation of the system [changed] over almost a decade.”

How “micro” is the seL4 microkernel? That is, what standard functionality does it provide, and what functionality (that is common in larger OS kernels) does it not provide?


Gerwin Klein: It is pretty micro: it is a true microkernel in the L4 tradition. This means it is very small compared to traditional monolithic OS kernels: Linux has on the order of a few million lines of code; seL4 has on the order of 10,000 lines of C. seL4 provides only the core mechanisms that you need to build an OS. These include threads and scheduling, virtual memory, interrupts and asynchronous signals, as well as synchronous message passing. Perhaps the most important feature of seL4 is its strong access control mechanism, which can be used together with the rest to build isolated components with controlled communication between them. These components could be small isolated device drivers or an entire Linux guest OS; the developer can choose the granularity.

The philosophy is: if a mechanism strictly needs privileged hardware access to be implemented, it is in the kernel; if it can be implemented using the other mechanisms as a user-level task, it is outside the kernel. A surprising number of features can live outside: device drivers, graphics, files and file systems, network stacks, and even disk paging are not provided by seL4 but can be implemented on top of it. What you (hopefully) get is that tricky complexity is concentrated in the kernel, and user-level OS components become more modular.
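
To make the structure concrete, here is a minimal sketch of the kind of user-level service Klein describes, with the OS functionality living in an ordinary task rather than in the kernel. The names are hypothetical stand-ins written for this post, not the actual seL4 API; the real system would use seL4’s capability-protected synchronous IPC where the sketch uses ipc_recv and ipc_reply.

    /* Illustrative sketch only -- hypothetical names, not the real seL4 API.
       A "file system" runs as an ordinary user-level task; the kernel's only
       role is to deliver the synchronous messages between client and server. */
    #include <stdio.h>

    typedef enum { OP_READ, OP_WRITE } op_t;
    typedef struct { op_t op; int block; int data; } request_t;
    typedef struct { int status; int data; } reply_t;

    /* Stand-ins for blocking on, and replying to, an IPC endpoint.  In the
       real system these would be kernel-provided message-passing primitives
       guarded by capabilities. */
    static request_t ipc_recv(void)        { request_t r = { OP_READ, 3, 0 }; return r; }
    static void      ipc_reply(reply_t rp) { printf("reply: status=%d data=%d\n", rp.status, rp.data); }

    int main(void) {
        int storage[16] = { 0 };                       /* the service's private state */

        for (int served = 0; served < 1; served++) {   /* normally: for (;;)          */
            request_t req = ipc_recv();                /* block until a client calls  */
            reply_t   rep = { 0, 0 };

            switch (req.op) {                          /* all "OS" policy lives here, */
            case OP_READ:                              /* outside the kernel          */
                rep.data = storage[req.block % 16];
                break;
            case OP_WRITE:
                storage[req.block % 16] = req.data;
                break;
            }
            ipc_reply(rep);                            /* send the answer back        */
        }
        return 0;
    }

The point of the sketch is the division of labor: the kernel’s only job is to deliver the messages, while all of the “OS” logic (and any bug in it) stays inside one isolated user-level component.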

Read more »

Conversation with Jacob Steinhardt about MIRI Strategy

Conversations, MIRI Strategy

On January 21st, 2014, MIRI met with Jacob Steinhardt to discuss MIRI strategy. Participants were:

We recorded and transcribed much of the conversation, and then edited and paraphrased the transcript for clarity and concision, and to protect the privacy of some content. The resulting edited transcript is available in full here (6 pages).

Topics discussed include research communication style, inferential distance, FAI research directions, and Knightian uncertainty.

2013 in Review: Strategic and Expository Research

MIRI Strategy

This is the 3rd part of my personal and qualitative self-review of MIRI in 2013, in which I begin to review MIRI’s 2013 research activities. By “research activities” I mean to include outreach efforts primarily aimed at researchers, along with the three types of research performed by MIRI: strategic research, expository research, and Friendly AI research.

I’ll review MIRI’s strategic and expository research in this post; my review of MIRI’s 2013 Friendly AI research will appear in a future post. For the rest of this post, I usually won’t try to distinguish which writings are “expository” vs. “strategic” research, since most of them are partially of both kinds.

Read more »


  1. Note that what I call “MIRI’s strategic research” or “superintelligence strategy research” is a superintelligence-focused subset of what GiveWell would call “strategic cause selection research” and what CEA would call “cause prioritization research.”

MIRI’s Experience with Google Adwords

MIRI Strategy

In late 2011, MIRI opened a Google Grants account, which provides nonprofits with $10k/mo in free Google Adwords. Kevin Fischer and I tweaked our Adwords account over several months until we successfully spent the full $10,000/mo for 3 months in a row.

This qualified us for Grants Pro, and we are still grandfathered into that (now unavailable) $40k/mo level of free Adwords. The limit is actually $1,350/day, which is challenging to spend wisely, even with the recently increased maximum bid of $2/click. But with more tweaking we are now able to spend nearly all of it each month. Kevin and I probably spent 100 hours between us over the past few years optimizing this, but much of that was done while we were both volunteering for MIRI. Ongoing tweaking requires only an hour or less of my time per month.

The traffic is large (2/3 of our total) but marginal in quality. In the past 6 months, we drove ~250,000 visitors to MIRI’s site via Google Adwords. Of those visitors:

  • 5000 people read at least one of our research papers
  • 500 signed up for the newsletter (our true goal since it gives them a chance to hear from us again)
  • 150 went to the volunteer site (and almost none signed up once they got there)
  • 100 applied to attend research workshops (no qualified candidates yet)

Our impression is that MIRI has an especially hard time making good use of Google Adwords, because there is such a gulf of inferential distance between what we do and what people already know. Many of the things we could show someone who had never heard of us, in an attempt to have a strong impact, would plausibly be more misleading than helpful. We expect most charities could make higher-value use of Google Adwords than we can for this reason, including e.g. effective altruism meta-charities or animal welfare groups.

Paying $16 for each newsletter subscriber is pretty bad, and it’s not remotely how we would spend $1,350/day if it were unrestricted money. But Adwords creates some value on the margin, and we’re glad Google includes us in the program and that we’re able to reach new people about our work by using it. It’s 1,000 new people getting our newsletter every year, and more eyeballs on our content. Some of them might pass it along to someone else who would be good for a workshop, or something.

Also, a word of warning to other nonprofits: we tried many times to get interns or volunteers to improve our Adwords account, but nobody was good at it. Lots of remote supporters (some of whom swore they were amazing at Adwords) were given access and didn’t make a single change to a single campaign, much less create new experiments and find improvements. We also briefly tried paying a contractor to make new ads; they at least tried things, but their ads didn’t generate value, so we had to let them go.

If charities work with volunteers or supporters to improve their Adwords accounts, I’d recommend requiring proof that the volunteer is currently managing at least one other large Adwords account successfully (or only asking volunteers for cheap things like ideas, without expecting help with the much more costly and difficult work of actually implementing those ideas).

Careers at MIRI

News

We’ve published a new Careers page, which advertises current job openings at MIRI.

As always, we’re seeking math researchers to make progress on Friendly AI theory. If you’re interested, the next step is not to apply for the position directly, but to apply to attend a future MIRI research workshop.

We are also accepting applications for a grants manager, a science writer, and an executive assistant.

Visit our Careers page to apply.

Ronald de Wolf on Quantum Computing

Conversations

Ronald de Wolf is a senior researcher at CWI and a part-time full professor at the University of Amsterdam. He obtained his PhD there in 2001 with a thesis about quantum computing and communication complexity, advised by Harry Buhrman and Paul Vitanyi. Subsequently he was a postdoc at UC Berkeley. His scientific interests include quantum computing, complexity theory, and learning theory.

He also holds a Master’s degree in philosophy (where his thesis was about Kolmogorov complexity and Occam’s razor), and enjoys classical music and literature.

Luke Muehlhauser: Before we get to quantum computing, let me ask you about philosophy. Among other topics, your MSc thesis discusses the relevance of computational learning theory to philosophical debates about Occam’s razor, which is the principle advocating that “among the theories, hypotheses, or explanations that are consistent with the facts, we are to prefer simpler over more complex ones.”

Though many philosophers and scientists adhere to the principle of Occam’s razor, it is often left ambiguous exactly what is meant by “simpler,” and also why this principle is justified in the first place. But in your thesis you write that “in certain formal settings we can, more or less, prove that certain versions of Occam’s Razor work.”

Philosophers are usually skeptical when I argue for K-complexity versions of Occam’s razor, as you do. For example, USC’s Kenny Easwaran once wrote, “I’ve never actually seen how [a K-complexity based simplicity measure] is supposed to solve anything, given that it always depends on a choice of universal machine.”

How would you reply, given your optimism about justifying Occam’s razor “in certain formal settings”?
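
For readers unfamiliar with the objection, it turns on the fact that Kolmogorov complexity is only defined relative to a choice of universal machine. The standard reply appeals to the invariance theorem (a textbook fact, stated here for context rather than quoted from the interview): for any two universal prefix machines U and V there is a constant c_{U,V}, independent of the string x, such that

\[ \lvert K_U(x) - K_V(x) \rvert \;\le\; c_{U,V} \quad \text{for all } x, \]

so changing machines shifts complexities by at most an additive constant. The remaining worry is that this constant can still be large relative to the short hypotheses one actually wants to compare.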

Read more »