New Paper: “Embryo Selection for Cognitive Enhancement”

Papers

During his time as a MIRI research fellow, Carl Shulman co-authored (with Nick Bostrom) a paper that is now available as a preprint, titled “Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?”


Human capital is a key determinant of personal and national outcomes, and a major input to scientific progress. It has been suggested that advances in genomics will make it possible to enhance human intellectual abilities. One way to do this would be via embryo selection in the context of in vitro fertilization (IVF). In this article, we analyze the feasibility, timescale, and possible societal impacts of embryo selection for cognitive enhancement. We find that embryo selection, on its own, may have significant impacts, but likely not drastic ones, over the next 50 years, though large effects could accumulate over multiple generations. However, there is a complementary technology, stem cell-derived gametes, which has been making rapid progress and which could amplify the impact of embryo selection, enabling very large changes if successfully applied to humans.

The last sentence refers to “iterated embryo selection” (IES), a future technology first described by MIRI in 2009. This technology has significant strategic relevance for Friendly AI (FAI) development because it might be the only intelligence amplification (IA) technology (besides whole brain emulation) to have large enough effects on human intelligence to substantially shift our odds of getting FAI before arbitrary AGI, if AGI is developed sometime this century.

Unfortunately, it remains unclear whether the arrival of IES would shift our FAI chances positively or negatively. On the one hand, a substantially smarter humanity may be wiser, and more likely to get FAI right. On the other hand, IES might accelerate AGI relative to FAI, since AGI is more parallelizable than FAI. (For more detail, and more arguments pointing in both directions, see Intelligence Amplification and Friendly AI.)

Markus Schmidt on Risks from Novel Biotechnologies

Conversations

Dr. Markus Schmidt is founder and team leader of Biofaction, a research and science communication company in Vienna, Austria. With an educational background in electronic engineering, biology, and environmental risk assessment, he has carried out environmental risk assessment, safety, and public perception studies in a number of science and technology fields (GM-crops, gene therapy, nanotechnology, converging technologies, and synthetic biology) for more than 10 years.

He has been a coordinator or partner in several national and European research projects, including SYNBIOSAFE, the first European project on the safety and ethics of synthetic biology (2007-2008); COSY, on communicating synthetic biology (2008-2009); TARPOL, on industrial and environmental applications of synthetic biology (2008-2010); CISYNBIO, on the depiction of synthetic biology in movies (2009-2012); a joint Sino-Austrian project on synthetic biology and risk assessment (2009-2012); and ST-FLOW, on standardization for robust bioengineering of new-to-nature biological properties (2011-2015).

He produced science policy reports for the Office of Technology Assessment at the German Bundestag (on GM-crops in China) and the Austrian Ministry of Transport, Innovation and Technology (on nanotechnology and converging technologies). He served as an advisor to the European Group on Ethics (EGE) of the European Commission, the US Presidential Commission for the Study of Bioethical Issues, the J. Craig Venter Institute, the Alfred P. Sloan Foundation, and the Bioethics Council of the German Parliament, as well as to several thematically related international projects. Markus Schmidt is the author of several peer-reviewed articles; he has edited a special issue and two books on synthetic biology and its societal ramifications, and produced the first documentary film about synthetic biology.

In addition to his scientific work, he organized a Science Film Festival and produced an art exhibition (both in 2011) to explore novel and creative ideas about, and interpretations of, the future of biotechnology.

Luke Muehlhauser: I’ll start by giving our readers a quick overview of synthetic biology, the “design and construction of biological devices and systems for useful purposes.” As explained in a 2012 book you edited, major applications of synthetic biology include:

  • Biofuels: ethanol, algae-based fuels, bio-hydrogen, microbial fuel cells, etc.
  • Bioremediation: wastewater treatment, water desalination, solid waste decomposition, CO₂ recapturing, etc.
  • Biomaterials: bioplastics, bulk chemicals, cellulosomes, etc.
  • Novel developments: protocells and xenobiology for the production of novel cells and organisms.

But in addition to promoting the useful applications of synthetic biology, you also speak and write extensively about the potential risks of synthetic biology. Which risks from novel biotechnologies are you most concerned about?


Bas Steunebrink on Self-Reflective Programming

Conversations

Bas Steunebrink is a postdoctoral researcher at the Swiss AI lab IDSIA, as part of Prof. Schmidhuber’s group. He received his PhD in 2010 from Utrecht University, the Netherlands. Bas’s dissertation was on the subject of artificial emotions, which fits well with his continuing quest to find practical and creative ways in which generally intelligent agents can deal with time and resource constraints. A recent paper on how such agents will naturally strive to be effective, efficient, and curious was awarded the Kurzweil Prize for Best AGI Idea at AGI’2013. Bas also has a great interest in anything related to self-reflection and meta-learning, and all “meta” stuff in general.

Luke Muehlhauser: One of your ongoing projects has been a Gödel machine (GM) implementation. Could you please explain (1) what a Gödel machine is, (2) why you’re motivated to work on that project, and (3) what your implementation of it does?

Bas Steunebrink: A GM is a program consisting of two parts running in parallel; let’s name them Solver and Searcher. Solver can be any routine that does something useful, such as solving task after task in some environment. Searcher is a routine that tries to find beneficial modifications to both Solver and Searcher, i.e., to any part of the GM’s software. So Searcher can inspect and modify any part of the Gödel Machine. The trick is that the initial setup of Searcher only allows Searcher to make such a self-modification if it has a proof that performing this self-modification is beneficial in the long run, according to an initially provided utility function. Since Solver and Searcher are running in parallel, you could say that a third component is necessary: a Scheduler. Of course Searcher also has read and write access to the Scheduler’s code.
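The Solver/Searcher/Scheduler structure Steunebrink describes can be sketched as a toy skeleton. Everything below (the class and method names, the trivial task loop, the always-empty proof search) is illustrative only, assumed for the sake of the sketch, and not drawn from any actual Gödel machine implementation:

```python
import threading
import time

class GoedelMachine:
    """Toy sketch of the Solver/Searcher/Scheduler structure."""

    def __init__(self, utility):
        self.utility = utility            # initially provided utility function
        self.solver_code = self.default_solver
        self.running = True

    def default_solver(self):
        # Solver: any routine that does something useful
        # (here, a placeholder task step).
        time.sleep(0.01)

    def proof_search(self):
        # Searcher's core: in a real GM this would enumerate proofs that a
        # proposed self-modification raises long-run expected utility.
        # This sketch never finds one, so it returns None.
        return None

    def searcher(self):
        # Searcher runs in parallel with Solver and may rewrite any part
        # of the machine's software -- but only given a proof of benefit.
        while self.running:
            candidate = self.proof_search()
            if candidate is not None:
                self.solver_code = candidate
            time.sleep(0.01)

    def run(self, steps=5):
        # A minimal Scheduler: Searcher runs in a background thread while
        # Solver executes task steps in the foreground. In a real GM the
        # Scheduler's own code is also open to proven self-modification.
        t = threading.Thread(target=self.searcher)
        t.start()
        for _ in range(steps):
            self.solver_code()
        self.running = False
        t.join()

gm = GoedelMachine(utility=lambda state: 0.0)
gm.run(steps=3)
```

Since the proof search here never succeeds, the machine simply runs its initial Solver unchanged, which mirrors the key property of the setup: no self-modification happens without a proof.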

[Figure: Gödel machine scheduler diagram]


Probabilistic Metamathematics and the Definability of Truth

News, Video

On October 15th, Paul Christiano presented “Probabilistic metamathematics and the definability of truth” at Harvard University as part of Logic at Harvard (details here). As explained here, Christiano came up with the idea for this approach, and it was developed further at a series of MIRI research workshops.

Video of the talk is now available:

The video is occasionally blurry due to camera problems, but is still clear enough to watch.

Hadi Esmaeilzadeh on Dark Silicon

Conversations

Hadi Esmaeilzadeh recently joined the School of Computer Science at the Georgia Institute of Technology as an assistant professor. He is the first holder of the Catherine M. and James E. Allchin Early Career Professorship. Hadi directs the Alternative Computing Technologies (ACT) Lab, where he and his students are working on developing new technologies and cross-stack solutions to improve the performance and energy efficiency of computer systems for emerging applications. Hadi received his Ph.D. from the Department of Computer Science and Engineering at the University of Washington. He has a Master’s degree in Computer Science from the University of Texas at Austin, and a Master’s degree in Electrical and Computer Engineering from the University of Tehran. Hadi’s research has been recognized by three Communications of the ACM Research Highlights and three IEEE Micro Top Picks. His work on dark silicon has also been profiled in the New York Times.

Luke Muehlhauser: Could you please explain for our readers what “dark silicon” is, and why it poses a threat to the historical exponential trend in computing performance growth?

Hadi Esmaeilzadeh: I would like to answer your question with a question. What is the difference between the computing industry and commodity industries, like the paper towel industry?

The main difference is that the computing industry is an industry of new possibilities, while the paper towel industry is an industry of replacement. You buy paper towels because you run out of them; but you buy new computing products because they get better.

And it is not just the computers that are improving; the services and experiences they offer consistently improve as well. Can you even imagine running out of Microsoft Windows?

One of the primary drivers of this economic model is the exponential reduction in the cost of performing general-purpose computing. While in 1971, at the dawn of microprocessors, the price of 1 MIPS (million instructions per second) was roughly $5,000, today it is about 4¢. This is an exponential reduction in the cost of the raw material for computing. This continuous and exponential reduction in cost has formed the basis of the computing industry’s economy over the past four decades.
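Taking these figures at face value, the implied rate of improvement can be worked out directly. The two price points are from the interview; the ~42-year span is an assumption (the interview dates from 2013):

```python
import math

# Figures quoted above: ~$5,000 per MIPS in 1971, ~4 cents "today".
cost_1971 = 5000.0
cost_today = 0.04
years = 2013 - 1971                          # assumed ~42-year span

total_factor = cost_1971 / cost_today        # overall cost reduction: 125,000x
annual_factor = total_factor ** (1 / years)  # per-year improvement factor
halving_time = years * math.log(2) / math.log(total_factor)

print(f"total reduction: {total_factor:,.0f}x")            # 125,000x
print(f"~{(1 - 1/annual_factor) * 100:.0f}% cheaper/year") # ~24% cheaper/year
print(f"cost halves every ~{halving_time:.1f} years")      # ~2.5 years
```

A halving time of roughly two and a half years is consistent with the Moore’s-law-era trend the interview goes on to discuss.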


Russell and Norvig on Friendly AI

Analysis

AI: A Modern Approach is by far the dominant textbook in the field. It is used in 1200 universities, and is currently the 22nd most-cited publication in computer science. Its authors, Stuart Russell and Peter Norvig, devote significant space to AI dangers and Friendly AI in section 26.3, “The Ethics and Risks of Developing Artificial Intelligence.”

The first 5 risks they discuss are:

  • People might lose their jobs to automation.
  • People might have too much (or too little) leisure time.
  • People might lose their sense of being unique.
  • AI systems might be used toward undesirable ends.
  • The use of AI systems might result in a loss of accountability.

Each of those sections is one or two paragraphs long. The final subsection, “The Success of AI might mean the end of the human race,” is given 3.5 pages. Here’s a snippet:

The question is whether an AI system poses a bigger risk than traditional software. We will look at three sources of risk. First, the AI system’s state estimation may be incorrect, causing it to do the wrong thing. For example… a missile defense system might erroneously detect an attack and launch a counterattack, leading to the death of billions…

Second, specifying the right utility function for an AI system to maximize is not so easy. For example, we might propose a utility function designed to minimize human suffering, expressed as an additive reward function over time… Given the way humans are, however, we’ll always find a way to suffer even in paradise; so the optimal decision for the AI system is to terminate the human race as soon as possible – no humans, no suffering…

Third, the AI system’s learning function may cause it to evolve into a system with unintended behavior. This scenario is the most serious, and is unique to AI systems, so we will cover it in more depth. I.J. Good wrote (1965),

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.


Richard Posner on AI Dangers

Analysis

Richard Posner is a jurist, legal theorist, and economist. He is also the author of nearly 40 books, and is by far the most-cited legal scholar of the 20th century.

In 2004, Posner published Catastrophe: Risk and Response, in which he discusses risks from AGI at some length. His analysis is interesting in part because it appears to be intellectually independent from the Bostrom-Yudkowsky tradition that dominates the topic today.

In fact, Posner does not appear to be aware of earlier work on the topic by I.J. Good (1970, 1982), Ed Fredkin (1979), Roger Clarke (1993, 1994), Daniel Weld & Oren Etzioni (1994), James Gips (1995), Blay Whitby (1996), Diana Gordon (2000), Chris Harper (2000), or Colin Allen (2000). He is not even aware of Hans Moravec (1990, 1999), Bill Joy (2000), Nick Bostrom (1997, 2003), or Eliezer Yudkowsky (2001). Basically, he seems to know only of Ray Kurzweil (1999).

Still, much of Posner’s analysis is consistent with the basic points of the Bostrom-Yudkowsky tradition:

[One class of catastrophic risks] consists of… scientific accidents, for example accidents involving particle accelerators, nanotechnology…, and artificial intelligence. Technology is the cause of these risks, and slowing down technology may therefore be the right response.

…there may some day, perhaps some day soon (decades, not centuries, hence), be robots with human and [soon thereafter] more than human intelligence…

…Human beings may turn out to be the twenty-first century’s chimpanzees, and if so the robots may have as little use and regard for us as we do for our fellow, but nonhuman, primates…

…A robot’s potential destructiveness does not depend on its being conscious or able to engage in [e.g. emotional processing]… Unless carefully programmed, the robots might prove indiscriminately destructive and turn on their creators.

…Kurzweil is probably correct that “once a computer achieves a human level of intelligence, it will necessarily roar past it”…

One major point of divergence seems to be that Posner worries about a scenario in which AGIs become self-aware, re-evaluate their goals, and decide not to be “bossed around by a dumber species” anymore. In contrast, Bostrom and Yudkowsky think AGIs will be dangerous not because they will “rebel” against humans, but because (roughly) using all available resources — including those on which human life depends — is a convergent instrumental goal for almost any set of final goals a powerful AGI might possess. (See e.g. Bostrom 2012.)

Ben Goertzel on AGI as a Field

Conversations

Dr. Ben Goertzel is Chief Scientist of financial prediction firm Aidyia Holdings; Chairman of AI software company Novamente LLC and bioinformatics company Biomind LLC; Chairman of the Artificial General Intelligence Society and the OpenCog Foundation; Vice Chairman of futurist nonprofit Humanity+; Scientific Advisor of biopharma firm Genescient Corp.; Advisor to the Singularity University and MIRI; Research Professor in the Fujian Key Lab for Brain-Like Intelligent Systems at Xiamen University, China; and General Chair of the Artificial General Intelligence conference series. His research work encompasses artificial general intelligence, natural language processing, cognitive science, data mining, machine learning, computational finance, bioinformatics, virtual worlds and gaming, and other areas. He has published a dozen scientific books, 100+ technical papers, and numerous journalistic articles. Before entering the software industry he served as university faculty in several departments of mathematics, computer science, and cognitive science in the US, Australia, and New Zealand. He has three children and too many pets, and in his spare time enjoys creating avant-garde fiction and music, and exploring the outdoors.
