Singularity Hypotheses: A Scientific and Philosophical Assessment has now been published by Springer, in hardcover and ebook forms.
The book contains 20 chapters about the prospect of machine superintelligence, including 4 chapters by MIRI researchers and research associates.
“Intelligence Explosion: Evidence and Import” (pdf) by Luke Muehlhauser and (former MIRI researcher) Anna Salamon reviews
the evidence for and against three claims: that (1) there is a substantial chance we will create human-level AI before 2100; that (2) if human-level AI is created, there is a good chance vastly superhuman AI will follow via an “intelligence explosion”; and that (3) an uncontrolled intelligence explosion could destroy everything we value, but a controlled intelligence explosion would benefit humanity enormously if we can achieve it. The authors conclude with recommendations for increasing the odds of a controlled intelligence explosion relative to an uncontrolled one.
“Intelligence Explosion and Machine Ethics” (pdf) by Luke Muehlhauser and Louie Helm discusses the challenges of formal value systems for use in AI:
Many researchers have argued that a self-improving artificial intelligence (AI) could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals. If so, and if the AI’s goals differ from ours, then this could be disastrous for humans. One proposed solution is to program the AI’s goal system to want what we want before the AI self-improves beyond our capacity to control it. Unfortunately, it is difficult to specify what we want. After clarifying what we mean by “intelligence,” we offer a series of “intuition pumps” from the field of moral philosophy for our conclusion that human values are complex and difficult to specify. We then survey the evidence from the psychology of motivation, moral psychology, and neuroeconomics that supports our position. We conclude by recommending ideal preference theories of value as a promising approach for developing a machine ethics suitable for navigating an intelligence explosion or “technological singularity.”
“Friendly Artificial Intelligence” by Eliezer Yudkowsky is a shortened version of Yudkowsky (2008).
Finally, “Artificial General Intelligence and the Human Mental Model” (pdf) by Roman Yampolskiy and (MIRI research associate) Joshua Fox reviews the dangers of anthropomorphizing machine intelligences:
When the first artificial general intelligences are built, they may improve themselves to far-above-human levels. Speculations about such future entities are already affected by anthropomorphic bias, which leads to erroneous analogies with human minds. In this chapter, we apply a goal-oriented understanding of intelligence to show that humanity occupies only a tiny portion of the design space of possible minds. This space is much larger than what we are familiar with from the human example; and the mental architectures and goals of future superintelligences need not have most of the properties of human minds. A new approach to cognitive science and philosophy of mind, one not centered on the human example, is needed to help us understand the challenges which we will face when a power greater than us emerges.
The book also includes brief critical responses to most chapters, including responses written by Eliezer Yudkowsky and (former MIRI staffer) Michael Anissimov.