Anders Sandberg on Space Colonization


Anders Sandberg works at the Future of Humanity Institute, a part of the Oxford Martin School and the Oxford University philosophy faculty. Anders’ research at the FHI centres on societal and ethical issues surrounding human enhancement, estimating the capabilities and underlying science of future technologies, and issues of global catastrophic risk. In particular he has worked on cognitive enhancement, whole brain emulation and risk model uncertainty. He is senior researcher for the FHI-Amlin Research Collaboration on Systemic Risk of Modelling, a unique industry collaboration investigating how insurance modelling contributes to or can mitigate systemic risks.

Anders has a background in computer science and neuroscience. He obtained his Ph.D. in computational neuroscience from Stockholm University, Sweden, for work on neural network modelling of human memory. He is co-founder of and writer for the think tank Eudoxa, and a regular participant in international public debates about emerging technology.

Luke Muehlhauser: In your paper with Stuart Armstrong, “Eternity in Six Hours,” you run through a variety of calculations based on known physics, and show that “Given certain technological assumptions, such as improved automation, the task of constructing Dyson spheres, designing replicating probes, and launching them at distant galaxies, become quite feasible. We extensively analyze the dynamics of such a project, including issues of deceleration and collision with particles in space.”

You frame the issue in terms of the Fermi paradox, but I’d like to ask about your paper from the perspective of “How hard would it be for an AGI-empowered, Earth-based civilization to colonize the stars?”

In section 6.3 you comment on the robustness of the result:

In the estimation of the authors, the assumptions on intergalactic dust and on the energy efficiency of the rockets represent the most vulnerable part of the whole design; small changes to these assumptions result in huge increases in energy and material required (though not to a scale unfeasible on cosmic timelines). If large particle dust density were an order of magnitude larger, reaching outside the local group would become problematic without shielding methods.

What about the density of intragalactic dust? Given your technological assumptions, do you think it would be fairly straightforward to colonize most of the Milky Way from Earth?


Anders Sandberg: I believe it would be straightforward given the technological assumptions we make, but to demonstrate it requires much more exploratory engineering footwork.

Intergalactic dust appears to be relatively thin, which is unsurprising: most of the contents of intergalactic space are primordial. There are contributions from galaxies, but they are relatively minor. Another way of seeing that the dust is thin is the absence of obscuration of remote galaxies: the total grain cross-section along the line of sight is small.

This is unfortunately not true inside the galactic plane. At least in some directions dust is optically thick: the probability of a photon (or a fast spacecraft) hitting a dust particle along its trajectory is high. The size distribution of course matters: if it were all in very small particles the problem is different from that posed by rarer pebbles. (If the total mass density is ρ kg/m³, and the average grain radius is r metres, then the number density N scales as ρ/r³ grains per cubic metre, while the total cross-section area scales as r²N ∝ ρ/r – finer dust can obscure more, down to the diffraction limit.) The actual size distribution for the larger grains that would be a hazard is unfortunately not experimentally known – a fair amount is known about smaller grains, but the ones we care about here are hard to observe.
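As a minimal sketch of that scaling: for a fixed dust mass density, splitting the mass into smaller grains raises the number density as 1/r³ and the obscuring cross-section per unit volume as 1/r. The grain material density and the total dust density used below are assumed round numbers, purely for illustration.

```python
# Illustrative scaling: for fixed total dust mass density rho, grain number
# density goes as 1/r^3 and total cross-section per unit volume as 1/r.
# rho_grain (grain material density) and rho_total are assumed round numbers.
import math

def grain_stats(rho_total, r, rho_grain=3000.0):
    """rho_total: dust mass density [kg/m^3]; r: grain radius [m];
    rho_grain: assumed density of the grain material [kg/m^3]."""
    m_grain = (4.0 / 3.0) * math.pi * r**3 * rho_grain   # mass of one grain [kg]
    n = rho_total / m_grain                              # grains per m^3
    area_per_volume = math.pi * r**2 * n                 # cross-section per m^3 [1/m]
    return n, area_per_volume

for r in (1e-7, 1e-6, 1e-5):                             # 0.1, 1 and 10 micron grains
    n, a = grain_stats(1e-20, r)
    print(f"r = {r:.0e} m: N = {n:.2e} /m^3, area/volume = {a:.2e} /m")
```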

The approach in our paper consists of just sending enough probes to have a good chance of one reaching the target without any collision. This has an effective distance cutoff that depends on the density of dangerously large grains (d ≈ 1/σN, where σ is the probe cross-section and N is the number density). One might argue that sending a lot of probes could work beyond that, but since the required number grows as exp(distance/d) this soon swamps any local resources. So clearly some shielding is necessary; this reduces the effective N. (Getting a very small σ is also good, up until the point where the length of the javelin-shaped probe becomes long enough that it starts to suffer lateral hits.)
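As a rough sketch of that cutoff and the exponential blow-up (with assumed, illustrative values for σ and N – not the figures used in the paper):

```python
# Mean free path d = 1/(sigma*N); a single probe survives distance D with
# probability exp(-D/d), so the number of probes needed for a given success
# chance grows roughly as exp(D/d). sigma and N below are assumed values.
import math

LY = 9.46e15                                  # metres per light-year

def probes_needed(D_ly, sigma, N, p_target=0.99):
    d = 1.0 / (sigma * N)                     # mean free path [m]
    p_survive = math.exp(-D_ly * LY / d)      # one probe arriving unscathed
    n = math.log(1.0 - p_target) / math.log(1.0 - p_survive)
    return p_survive, n

for D in (10_000, 50_000, 100_000):           # trip lengths in light-years
    p, n = probes_needed(D, sigma=1e-2, N=3.5e-19)
    print(f"{D:>7,} ly: survival {p:.2f}, ~{n:.0f} probes for a 99% chance")
```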

Now, showing that this shielding is achievable is where things get much more involved. This requires analysing the interaction between relativistic grains and a target. We have done an informal analysis of this (unpublished), and think shielding can be made more than good enough for interstellar travel if one can build atomically precise structures. This seems to be roughly within the capabilities assumed in the rest of the paper.

Another interesting issue is that in our galaxy most dust sits within the thin disk. This is a disk around 3,000 light-years thick (with a scale height of about 400 light-years), which contains most of the gas and dust (plus most of the stellar mass). Outside it is the thick disk, which is far clearer. Even interstellar hydrogen gas is a rather nasty proton beam from the perspective of a relativistic traveller, so it might actually be useful to climb above the plane of the galaxy. This is a spiral-galaxy problem: elliptical galaxies tend to have little dust or gas, so they are much easier to travel through.
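As a minimal sketch of why leaving the plane helps (using only the ~400 light-year scale height quoted above; the angles are arbitrary examples): the dusty column a straight trajectory has to cross shrinks rapidly with the angle out of the plane, while an in-plane route stays inside the disk for the whole trip.

```python
# Path length through one scale height of the dusty thin disk for a straight
# trajectory tilted out of the galactic plane. H is the ~400 ly scale height
# mentioned above; the dust column falls off roughly on this scale.
import math

H = 400.0                                    # thin-disk scale height [light-years]

def dusty_column(angle_deg):
    """Approximate dusty path length [ly] for a trajectory tilted angle_deg out of the plane."""
    return H / math.sin(math.radians(angle_deg))

for a in (90, 45, 10):
    print(f"{a:>2} degrees out of plane: ~{dusty_column(a):,.0f} ly of dusty disk")
print("in-plane route across the galaxy: tens of thousands of ly, all of it in the disk")
```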


Luke: What if we just talk about colonizing the local group or the local supercluster? How sensitive are those outcomes to the density of intergalactic space dust and the energy efficiency of rockets?


Anders: There is a nice trade-off between dangerous dust density and safe travel distance: double the density of largish dust and your reach is halved. In our paper we assumed the density of intergalactic dust grains larger than a micron to be 10⁻³⁵ g/cm³, giving us a reach of 1,194 megaparsecs using just a single probe. This is more than enough for the local supercluster (which is just 33 megaparsecs across). So even if the dust were 36 times denser than we thought, this would still be achievable.

Note that inside galaxies densities can be up to a few million times higher than outside. So there we should expect the safe travel distance to be merely a thousand parsecs (and if, as seems likely, the interstellar medium also contains more large grains per unit of mass density, distances get even shorter – at least when flying through the dirtiest parts).
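Since the reach scales inversely with the density of dangerous grains, these numbers follow from simply rescaling the 1,194 megaparsec baseline quoted above; a quick check:

```python
# Reach scales as 1/(dangerous grain density). Rescale the single-probe
# baseline of 1,194 Mpc quoted above for the assumed intergalactic density.
BASELINE_REACH_MPC = 1194.0

def reach_mpc(density_factor):
    """Reach if the dangerous-grain density is density_factor times the assumed value."""
    return BASELINE_REACH_MPC / density_factor

print(f"36x denser dust:         {reach_mpc(36):.0f} Mpc  (local supercluster is ~33 Mpc)")
print(f"1e6x denser (in-galaxy): {reach_mpc(1e6) * 1000:.1f} kpc (~ a thousand parsecs)")
```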

Going slower reduces the impact energy, and hence shrinks the population of grains that count as dangerous. This effect is however mainly felt by low-velocity (<0.1 c) probes: as soon as you start going at an appreciable fraction of lightspeed you become sensitive to even rather small grains. However, for local colonization, if you are not competing with anybody or trying to outrun cosmological expansion, a leisurely pace might be totally acceptable. The energy demands of course decrease a lot too: for 99% c travel you need to supply energy corresponding to around 7 times the rest mass of the probe, while 0.5 c requires 15% of the rest-mass energy and 0.1 c just 0.5%.
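Those figures follow from the relativistic kinetic energy per unit of rest mass, E_k/mc² = γ − 1 (the 0.99 c case comes out near 6 rest masses of kinetic energy, or about 7 if the rest energy itself is included); a quick check:

```python
# Kinetic energy per unit of probe rest-mass energy: E_k / (m c^2) = gamma - 1.
import math

def kinetic_fraction(beta):
    """Kinetic energy in units of rest-mass energy for speed beta*c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma - 1.0

for beta in (0.99, 0.5, 0.1):
    print(f"{beta:.2f} c: E_k = {kinetic_fraction(beta):.3f} x m c^2")
```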

Some of this depends on how good rockets can be made. The rocket equation is my least favourite physics equation: it states that the reaction mass you need grows exponentially with the desired velocity change (and it gets even worse for relativistic rockets). This is why our paper assumed external launchers to send off the probes: using rockets for the launch too would have increased the energy and mass demands enormously. The total energy requirement difference between relatively inefficient fission rockets and efficient antimatter rockets is however less than an order of magnitude, since the efficiency boost is eaten up by a longer range and hence the need to send more probes.

Since the paper was written we have tinkered with laser-pushed rocket ideas: if one section of the probe continues ahead, sending back a very intense laser beam to slow the payload, deceleration can be made more effective. There are some very interesting optical possibilities here, if designs can be made atomically precise, that would make slowing quite energy-efficient. But in the end, I don't think these considerations will make many orders of magnitude of difference.


Luke: As explained in your paper, your plan assumes the probes would be launched by “fixed launch systems” such as coilguns, quenchguns, laser propulsion, and particle beam propulsion, which are vastly more efficient than rockets. (Rockets are reserved for deceleration near the end of a probe’s trip.) Could you give a brief description of these fixed launch systems? Which ones are you most confident will be technologically feasible?


Anders: Most people have heard about railguns, devices that use the Lorentz force to electromagnetically accelerate projectiles. These are based on an armature sliding along two current-bearing rails, pushed forward by the magnetic field through the loop formed by the rails and the armature. While able to generate tremendous acceleration they also tend to be *messy*, since the armature experiences friction with the rails, generating plasma discharges and erosion. Nevertheless, such systems exist today and can launch projectiles at a few km/s. While one can imagine scaling things up and using optimal materials, for relativistic launches railguns are likely entirely out.

In our paper we assumed some form of coilgun: the projectile never touches the launcher, and is accelerated by electromagnetic coils along the launch path. This has many practical benefits, like only needing to drive currents in one or two coils at a time. The main limitations are switching speeds (which we assume can be handled by advanced technology, especially since the motion is very predictable), magnetic saturation of the projectile (which limits the acceleration per coil, but can be handled by using many more coils) and resistance in the coils. Using superconductors for the coils seems entirely reasonable for a large-scale project with big resources. That also allows “quenchguns”, where energy is first stored as currents in superconducting coils that are then quenched, producing a very sharp field gradient that propels the payload. According to the literature and the experts we talked with, these can in principle be made very efficient, converting most of the electrical energy into kinetic energy. They are based on well-understood physics and can be seen as a modest extrapolation of current engineering to large scales.
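To get a feel for the scale of such a fixed launcher, a minimal estimate (the acceleration levels and the 0.8 c target speed are assumed round numbers, not figures from the paper): for constant proper acceleration a, the track length needed to reach βc is (c²/a)(γ − 1).

```python
# Track length for a fixed electromagnetic launcher at constant proper
# acceleration a: L = (c^2 / a) * (gamma - 1). Accelerations and the 0.8 c
# target speed are assumed illustrative values.
import math

C = 2.998e8                                   # speed of light [m/s]
AU = 1.496e11                                 # astronomical unit [m]

def track_length(beta, accel_gees):
    a = accel_gees * 9.81                     # proper acceleration [m/s^2]
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (C**2 / a) * (gamma - 1.0)         # launcher-frame length [m]

for g in (1e3, 1e6):
    L = track_length(0.8, g)
    print(f"0.8 c at {g:.0e} g: track ~ {L:.2e} m (~{L/AU:.3g} AU)")
```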

We have not investigated particle beam propulsion, but it clearly has some potential. In a way a particle beam is a coilgun-accelerated stream of very small projectiles, and part of its momentum can be gathered by the payload either by absorbing the particles (leading to heating and damage issues), or by using a mini-accelerator to “grip” them. There is not as much research on this as on electromagnetic or laser propulsion, but the physics seems sound. Whether it can be made efficient enough likely depends on whether the high efficiency of the particle accelerator can balance the inefficiencies of capturing the momentum at the receiving end. This is worth investigating further.

Laser propulsion has been experimentally demonstrated on small scales, and again is fairly well understood from an exploratory engineering standpoint. The main limitation on acceleration is heating of the launched object: ideally it would just reflect the beam back, but imperfect reflection means that the back part will heat up. Atomically precise manufacturing appears to allow surprisingly beam-resistant materials, even when based on currently known materials. Another problem for relativistic projectiles is that the beam becomes redshifted as the projectile speeds up, and that a normal beam spreads out as it gets further from the source, reducing efficiency. It turns out these problems can to some extent be fixed by some more radical optics: it looks like it is physically possible to create free-floating “optical fibres” (actually a long line of very thin lenses) that allow the beam intensity to remain high even over significant distances, adapting as the effective wavelength changes. These ideas were developed more fully after the manuscript was finished, so we do not make use of them in it. But they suggest (if they work out when fully written up and simulated) that laser launching (and laser-powered slowing) might be the most effective way for an advanced civilization to deliver relativistic payloads.
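The redshift penalty can be quantified with the standard ideal-sail result: the lab-frame thrust on a perfectly reflecting sail receding at βc from a beam of power P is F = (2P/c)(1 − β)/(1 + β), so a fixed beam pushes less and less as the probe approaches lightspeed. A minimal check (the terawatt figure is just an arbitrary unit of beam power):

```python
# Thrust on an ideal (perfectly reflecting) light sail receding at beta*c,
# per unit of beam power reaching it: F = (2P/c) * (1-beta)/(1+beta).
C = 2.998e8                                   # speed of light [m/s]

def sail_thrust(power_w, beta):
    """Lab-frame force [N] on an ideal mirror sail receding at beta*c."""
    return (2.0 * power_w / C) * (1.0 - beta) / (1.0 + beta)

for beta in (0.0, 0.5, 0.9, 0.99):
    print(f"beta = {beta:.2f}: {sail_thrust(1e12, beta):7.1f} N per TW of beam power")
```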


Luke: Now let’s talk about deceleration, where rockets may be needed. But before we get to rockets, let’s talk about more exotic solutions for deceleration: magnetic sails and Bussard ramjets. How might those work?


Anders: The Bussard ramjet is a classic idea for powering an interstellar starship: instead of lugging around a lot of reaction mass (the reason the rocket equation is so painful), why not gather interstellar gas and burn it in a fusion reactor? Unfortunately this requires a pretty high speed in order to get an appreciable mass inflow, and constructing a ramscoop that sucks up enough mass is daunting. It would probably have to be a vast electromagnetic funnel, requiring significant power to generate (and then some more power to compress the input for fusion). Most calculations in the literature end up with the depressing conclusion that it would not be feasible unless the scoop and fusion reaction were very effective, and the exhaust speed very high.

However, one might of course turn the thing around and instead use the funnel to slow the ship against the interstellar medium. Dana Andrews and Robert Zubrin suggested using a magnetic sail, where the field from a superconducting loop deflects charged particles and hence experiences drag. This might be quite effective for slowing relativistic probes; they get a result that implies that a 99% c probe could be slowed to typical interplanetary speeds in about two centuries.

For slowing in intergalactic space, where gas is thin, ramjets or magnetic sails are of course not very effective until the probe has come very close to the destination (though if it takes centuries to brake, the braking distance is still merely a few hundred light-years). Not needing reaction mass is very enticing. In our paper we did not explore these possibilities since we wanted a rather conservative estimate, but it looks likely that magnetic sails are worth a serious look in further modelling.


Luke: Why do you hate the relativistic rocket equation? How did you come up with your estimates for the energy efficiency of future rocket designs that could be used for deceleration?


Anders: The rocket equation states that the amount of reaction mass you need to expel grows exponentially with the desired ending velocity – this is why standard rockets are mostly fuel and the payload small. You can get better performance by expelling the reaction mass faster, up to the limit of light-speed (which would require a rocket that expels photons).

The relativistic rocket equation is even worse: as you speed up a lot of the energy turns into mass of the rocket rather than velocity. No amount of energy will get you beyond lightspeed, and as you approach it the acceleration decreases (as seen by an outside observer; somebody onboard the rocket will not notice anything). So the overall energy efficiency becomes worse and worse; unless you really want to get to your destination fast it might not be worth it.
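In equation form (with exhaust speeds as assumed example values): the classical mass ratio is m₀/m₁ = exp(Δv/vₑ), and for a relativistic rocket reaching βc from rest it becomes m₀/m₁ = ((1+β)/(1−β))^(c/2vₑ).

```python
# Mass ratios from the rocket equation, classical and relativistic (starting
# from rest). Exhaust speeds below are assumed examples: a fusion-like exhaust
# at 0.1 c and an ideal photon rocket at c.
import math

def mass_ratio_classical(dv, ve):
    """m0/m1 for velocity change dv with exhaust speed ve (non-relativistic)."""
    return math.exp(dv / ve)

def mass_ratio_relativistic(beta, ve_over_c):
    """m0/m1 to reach beta*c from rest, exhaust speed ve_over_c in units of c."""
    return ((1.0 + beta) / (1.0 - beta)) ** (1.0 / (2.0 * ve_over_c))

print(f"classical, dv = 3 * v_e: m0/m1 = {mass_ratio_classical(3.0, 1.0):.1f}")
for ve in (0.1, 1.0):
    print(f"relativistic, v_e = {ve:.1f} c: to 0.5 c m0/m1 = "
          f"{mass_ratio_relativistic(0.5, ve):.3g}, to 0.99 c m0/m1 = "
          f"{mass_ratio_relativistic(0.99, ve):.3g}")
```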

The energy estimates were taken from the literature; see our citations.

This is one area where engineering realities might make designs less efficient, and it does have a big effect: halving the efficiency squares the initial-mass-to-payload ratio. Ouch. Still, given the enormous amounts of energy, mass and time available, the end result is not too different: a very large number of probes can be sent using the resources of a single solar system over astronomically insignificant timescales. It is practically irrelevant if it takes 15 years instead of 6 hours to launch the probes. Inefficiencies do however bite more sharply the larger the replicator is: in the 500 ton case, a 45% less efficient fusion reactor means a complete launch of 1.52 billion probes takes a billion years. So the lesson is clearly that boosting efficiency and/or shrinking payloads has a big effect. If someone were to argue against our analysis they should probably focus here, looking for fundamental limits on efficiency or credible payload sizes. However, as discussed above, there might be alternatives like magnetic sails that vastly improve efficiency too.
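That squaring follows directly from the exponential in the rocket equation: halving the effective exhaust velocity turns exp(Δv/vₑ) into exp(2Δv/vₑ) = (exp(Δv/vₑ))². A quick numeric check with illustrative values:

```python
# Halving the effective exhaust velocity squares the mass ratio, since
# exp(dv / (v_e/2)) = exp(dv / v_e) ** 2. dv and v_e below are arbitrary examples.
import math

dv, ve = 0.5, 0.1                 # delta-v and exhaust velocity, in units of c
full = math.exp(dv / ve)
half = math.exp(dv / (ve / 2.0))
print(f"full efficiency: m0/m1 = {full:.0f}")
print(f"half efficiency: m0/m1 = {half:.0f} (= {full:.0f} squared = {full**2:.0f})")
```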


Luke: Which few “next analyses” would be most desirable, from your perspective, for helping to answer the question of how likely it is that an AGI-empowered Earth-based civilization could colonize the galaxy, the supercluster, or the observable universe?


Anders: Much of the work can be done using exploratory engineering: using known physics, can we make designs that perform well enough to fit into the overall mission profile? They do not have to be optimal to prove that it could be done by an advanced civilisation (indeed, when thinking about the Dyson shell material requirements we considered a “steampunk” approach of iron mirrors focusing sunlight on boilers driving water- or gas-filled turbines, in order to check that, even if Mercury or similar material sources lacked the elements for big photovoltaics, large-scale energy production was still possible). The more detail, the more convincing, but effort should mainly be directed at the big bottlenecks of efficiency, or where our understanding of the engineering is a bit weak. (More stars below = higher priority.)

★ Better designs for self-replicating installations with materials closure in generic space environments would obviously be nice: the classic NASA lunar factory study is ancient by now. This would also help us understand constraints on how quickly industrialisation of a system could be done, whether there are large first-mover advantages in the solar system that might pose arms-race temptations, and give some firmer numbers for seed sizes and replication times.

★ Understanding the AI needs for space mining and engineering would be useful. Full AGI is presumably by assumption able to run the infrastructure we require, but even lower-order intelligence might be adequate: many animals build elaborate structures adaptively, or navigate complex environments, without much general intelligence. This would also help estimate the importance of AI and AGI for human space industry.

★★ Relativistic mass drivers look feasible but cumbersome. We think investigating the limits and possibilities of laser-powered launching might be more productive.

★★★ We need a better understanding of the large particle distribution in the interstellar and intergalactic medium. Much of probe design hinges on whether to attempt a shielded probe or just send a large number of expendable probes.

★★ Slowing down using magnetic sails or lasers appears promising, but requires a good power source that can last for a very long time on intergalactic trips, presumably induced fission, fusion or antimatter. How do long-term storable energy sources scale? What is their minimum mass?

★ Automated navigation and colonization requires automated astronomy. We assumed the probes knew where they were going, but it would be interesting to analyse automated colonisation planning. Large orbiting infrastructures can likely act as very powerful telescopes.

★ From a Fermi question perspective it would be useful to investigate further the properties of spreading in the younger universe: are the shorter distances outweighed by the higher densities?

★★ From a strategic perspective it would be useful to investigate whether this type of replicating probe can successfully be prevented from invading an occupied galaxy or not. Much of long-term strategy may depend on whether invaders or defenders have the advantage, including whether very large singleton systems are possible or whether a significant amount of resources has to be spent on defence.


Luke: Thanks, Anders!