Anders Sandberg on Space Colonization

Conversations

Anders Sandberg works at the Future of Humanity Institute, a part of the Oxford Martin School and the Oxford University philosophy faculty. Anders’ research at the FHI centres on societal and ethical issues surrounding human enhancement, estimating the capabilities and underlying science of future technologies, and issues of global catastrophic risk. In particular he has worked on cognitive enhancement, whole brain emulation and risk model uncertainty. He is senior researcher for the FHI-Amlin Research Collaboration on Systemic Risk of Modelling, a unique industry collaboration investigating how insurance modelling contributes to or can mitigate systemic risks.

Anders has a background in computer science and neuroscience. He obtained his Ph.D. in computational neuroscience from Stockholm University, Sweden, for work on neural network modelling of human memory. He is co-founder and writer for the think tank Eudoxa, and a regular participant in international public debates about emerging technology.

Luke Muehlhauser: In your paper with Stuart Armstrong, “Eternity in Six Hours,” you run through a variety of calculations based on known physics, and show that “Given certain technological assumptions, such as improved automation, the task of constructing Dyson spheres, designing replicating probes, and launching them at distant galaxies, become quite feasible. We extensively analyze the dynamics of such a project, including issues of deceleration and collision with particles in space.”

You frame the issue in terms of the Fermi paradox, but I’d like to ask about your paper from the perspective of “How hard would it be for an AGI-empowered, Earth-based civilization to colonize the stars?”

In section 6.3, you comment on the robustness of the result:

In the estimation of the authors, the assumptions on intergalactic dust and on the energy efficiency of the rockets represent the most vulnerable part of the whole design; small changes to these assumptions result in huge increases in energy and material required (though not to a scale unfeasible on cosmic timelines). If large particle dust density were an order of magnitude larger, reaching outside the local group would become problematic without shielding methods.

What about the density of intragalactic dust? Given your technological assumptions, do you think it would be fairly straightforward to colonize most of the Milky Way from Earth?
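
For a sense of scale on why grain density matters, here is an illustrative back-of-the-envelope calculation (not a figure from the paper): a dust grain of mass $m$ striking a probe moving at speed $v$ deposits, in the probe’s frame, a kinetic energy of

\[
E_k = (\gamma - 1)\, m c^{2}, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} .
\]

At $v = 0.5c$, $\gamma \approx 1.15$, so even a hypothetical microgram grain ($m = 10^{-9}\,\mathrm{kg}$) carries $E_k \approx 0.15 \times 9 \times 10^{7}\,\mathrm{J} \approx 1.4 \times 10^{7}\,\mathrm{J}$, comparable to a few kilograms of TNT, which gives some intuition for why the assumed density of large grains feeds directly into the shielding and energy budget.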


The world’s distribution of computation (initial findings)

Analysis

 

What is the world’s current distribution of computation, and what will it be in the future?

This question is relevant to several issues in AGI safety strategy. To name just three examples:

  • If a large government or corporation wanted to quickly and massively upgrade its computing capacity so as to make a push for AGI or WBE, how quickly could it do so?
  • If a government thought that AGI or WBE posed a national security threat or global risk, how much computation could it restrict, how quickly?
  • How much extra computing is “immediately” available to a successful botnet or government, simply by running existing computers near 100% capacity rather than at their current utilization? (A toy version of this arithmetic is sketched below the list.)
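
As a rough illustration of the arithmetic behind the third question, here is a minimal sketch; the installed-base and utilization figures are placeholders, not estimates from Naik’s notes.

```python
# Back-of-the-envelope headroom calculation: how much extra computation becomes
# available by raising the utilization of an existing installed base?
# All figures below are illustrative placeholders, not measured estimates.

def extra_capacity_flops(installed_flops: float,
                         current_utilization: float,
                         target_utilization: float = 1.0) -> float:
    """Extra sustained FLOP/s gained by moving the installed base from its
    current average utilization up to a target utilization."""
    return installed_flops * (target_utilization - current_utilization)

if __name__ == "__main__":
    installed = 1e21   # hypothetical installed base, FLOP/s
    current = 0.10     # hypothetical current average utilization (10%)
    print(f"Headroom: {extra_capacity_flops(installed, current):.2e} FLOP/s")
```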

To investigate these questions, MIRI recently contracted Vipul Naik to gather data on the world’s current distribution of computation, including current trends. This blog post summarizes our initial findings by briefly responding to a few questions. Naik’s complete research notes are available here (22 pages). This work is meant to provide a “quick and dirty” launchpad for future, more thorough research into the topic.


Nik Weaver on Paradoxes of Rational Agency

Conversations

Nik Weaver is a professor of mathematics at Washington University in St. Louis. He did his graduate work at Harvard and Berkeley and received his Ph.D. in 1994. His main interests are functional analysis, quantization, and the foundations of mathematics. He is best known for his work on independence results in C*-algebra and his role in the recent solution of the Kadison-Singer problem. His most recent book is Forcing for Mathematicians.

Luke Muehlhauser: In Weaver (2013) you discuss some paradoxes of rational agency. Can you explain roughly what these “paradoxes” are, for someone who might not be all that familiar with provability logic?


Nik Weaver: Sure. First of all, these are “paradoxes” in the sense of being highly counterintuitive — they’re not outright contradictions.

They all relate to the basic Löbian difficulty that if you reason within a fixed axiomatic system S, and you know that some statement A is provable within S, you’re generally not able to deduce that A is true. This may be an inference that you and I would be willing to make, but if you try to build it into a formal system then the system becomes inconsistent. So, for a rational agent who reasons within a specified axiomatic system, knowing that a proof exists is not as good as actually having a proof.
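
In the standard notation of provability logic, writing $\Box_S A$ for “$A$ is provable in $S$”, the obstacle has a compact textbook statement (sketched here for reference):

\[
\text{(Löb's theorem)} \qquad \text{if } S \vdash \Box_S A \rightarrow A, \text{ then } S \vdash A .
\]

In particular, a consistent $S$ cannot prove the reflection schema $\Box_S A \rightarrow A$ for every sentence $A$, since taking $A = \bot$ would give $S \vdash \bot$. That is why an agent reasoning strictly inside $S$ cannot, in general, pass from “there is an $S$-proof of $A$” to “$A$”.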

This leads to some very frustrating consequences. Let’s say I want to build a spaceship, but first I need to be sure that it’s not going to blow up. I have an idea about how to prove this, but it’s extremely tedious, so I write a program to work out the details of the proof and verify that it’s correct. The good news is that when I run the program, it informs me that it was able to fill in the details successfully. The bad news is that I now know that there is a proof that the ship won’t blow up, but I still don’t know that the ship won’t blow up! I’m going to have to check the proof myself, line by line. It’s a complete waste of time, because I know that the program functions correctly (we can assume I’ve proven this), so I know that the line by line verification is going to check out, but I still have to do it.

You may say that I have been programmed badly. Whoever wrote my source code ought to have allowed me to accept “there is a proof that the ship won’t blow up” as sufficient justification for building the ship. This can be a general rule: for any statement A, let “there exists a proof of A” license all the actions that A licenses. We’re not contradicting Löb’s theorem — we still can’t deduce A from knowing there is a proof of A — but we’re finessing it by stipulating that knowing there’s a proof of A is good enough. But there are still problems. Imagine that I can prove that if the Riemann hypothesis is true, then the ship won’t blow up, and if it’s false then there exists a proof that the ship won’t blow up. Then I’m in a situation where I know that either A is true or there is a proof that A is true, but I don’t know which one. So even with the more liberal licensing condition, I still can’t build my lovely spaceship.
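
To spell out the failure in the same notation, with $G$ as an illustrative shorthand for “the ship won’t blow up” and $\mathrm{RH}$ for the Riemann hypothesis:

\[
\text{Licensing rule: build the ship if } \vdash G \ \text{ or } \ \vdash \Box_S G .
\]
\[
\text{What is provable: } \ \mathrm{RH} \rightarrow G \ \text{ and } \ \neg\mathrm{RH} \rightarrow \Box_S G, \ \text{ hence } \ G \vee \Box_S G .
\]

Neither disjunct is provable on its own, so neither clause of the licensing rule fires, and the more liberal condition still does not license building the ship.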

MIRI’s May 2014 Workshop

News


From May 3–11, MIRI will host its 7th Workshop on Logic, Probability, and Reflection. This workshop will focus on decision theory and tiling agents.

The participants — all veterans of past workshops — are:

If you have a strong mathematics background and might like to attend a future workshop, apply today! Even if there are no upcoming workshops that fit your schedule, please still apply, so that we can notify you of other workshops (long before they are announced publicly).

Conversation with Holden Karnofsky about Future-Oriented Philanthropy

Conversations

Recently, Eliezer and I had an email conversation with Holden Karnofsky to discuss future-oriented philanthropy, including MIRI. The participants were:

We then edited the email exchange into a streamlined conversation, available here.

See also four previous conversations between MIRI and Holden Karnofsky: on existential risk, on MIRI strategy, on transparent research analyses, and on flow-through effects.

John Baez on Research Tactics

Conversations

John Baez is a professor of mathematics at U.C. Riverside. Until recently he worked on higher category theory and quantum gravity. His internet column This Week’s Finds dates back to 1993 and is sometimes called the world’s first blog. In 2010, concerned about climate change and the future of the planet, he switched to working on more practical topics and started the Azimuth Project, an international collaboration to create a focal point for scientists and engineers interested in saving the planet. His research now focuses on the math of networks and information theory, which should help us understand the complex systems that dominate biology and ecology.

Luke Muehlhauser: In a previous interview, I asked Scott Aaronson which “object-level research tactics” he finds helpful when trying to make progress in theoretical research, and I provided some examples. Do you have any comments on the research tactics that Scott and I listed? Which recommended tactics of your own would you add to the list?


John Baez: What do you mean by “object-level” research tactics? I’ve got dozens of tactics. Some of them are ways to solve problems. But equally important, or maybe more so, are tactics for coming up with problems to solve: problems that are interesting but still easy enough to solve. By “object-level”, do you mean the former?

2013 in Review: Friendly AI Research

MIRI Strategy

This is the 4th part of my personal and qualitative self-review of MIRI in 2013, in which I review MIRI’s 2013 Friendly AI (FAI) research activities.1

Friendly AI research in 2013

  1. In early 2013, we decided to shift our priorities from research plus public outreach to a more exclusive focus on technical FAI research. This resulted in roughly as much public-facing FAI research in 2013 as in all past years combined.
  2. Also, our workshops succeeded in identifying candidates for hire. We expect to hire two 2013 workshop participants in the first half of 2014.
  3. During 2013, I learned many things about how to create an FAI research institute and FAI research field. In particular…
  4. MIRI needs to attract more experienced workshop participants.
  5. Much FAI research can be done by a broad community, and need not be labeled as FAI research. But, more FAI progress is made when the researchers themselves conceive of the research as FAI research.
  6. Communication style matters a lot.



  1. What counts as “Friendly AI research” is, naturally, a matter of debate. For most of this post I’ll assume “Friendly AI research” means “what Yudkowsky thinks of as Friendly AI research,” with the exception of intelligence explosion microeconomics, for reasons given in this post. 

MIRI’s February 2014 Newsletter

Newsletters

Machine Intelligence Research Institute

Dear friends, 

See below for news on our new ebook, new research, new job openings, and Google’s new AI ethics board.

Research Updates

News Updates

Other Updates

As always, please don’t hesitate to let us know if you have any questions or comments.

 

Best,
Luke Muehlhauser
Executive Director