MIRI’s November 2013 Newsletter
If you shop at the new AmazonSmile, Amazon donates 0.5% of the price of your eligible purchases to a charitable organization of your choosing.
MIRI is an eligible charitable organization, so the next time you consider purchasing something through Amazon, support MIRI by shopping at AmazonSmile!
If you get to Amazon.com via an affiliate link, remember to change “amazon.com” to “smile.amazon.com” in the address bar before making your purchase. Or, even easier, use the SmileAlways Chrome extension.
Greg Morrisett is the Allen B. Cutting Professor of Computer Science at Harvard University. He received his B.S. in Mathematics and Computer Science from the University of Richmond in 1989, and his Ph.D. from Carnegie Mellon in 1995. In 1996, he took a position at Cornell University, and in the 2003-04 academic year, he took a sabbatical and visited the Microsoft European Research Laboratory. In 2004, he moved to Harvard, where he has served as Associate Dean for Computer Science and Engineering, and where he currently heads the Harvard Center for Research on Computation and Society.
Morrisett has received a number of awards for his research on programming languages, type systems, and software security, including a Presidential Early Career Award for Scientists and Engineers, an IBM Faculty Fellowship, an NSF Career Award, and an Alfred P. Sloan Fellowship.
He served as Chief Editor for the Journal of Functional Programming and as an associate editor for ACM Transactions on Programming Languages and Systems and Information Processing Letters. He currently serves on the editorial board for the Journal of the ACM and as co-editor-in-chief for the Research Highlights column of Communications of the ACM. In addition, Morrisett has served on the DARPA Information Science and Technology Study (ISAT) Group, the NSF Computer and Information Science and Engineering (CISE) Advisory Council, Microsoft Research’s Technical Advisory Board, and Microsoft’s Trustworthy Computing Academic Advisory Board.
Luke Muehlhauser: One of the interesting projects in which you’re involved is SAFE, a DARPA-funded project “focused on a clean slate design for resilient and secure systems.” What is the motivation for this project, and in particular for its “clean slate” approach?
For centuries, philosophers wondered how we could learn what causes what. Some argued it was impossible, or possible only via experiment. Others kept hacking away at the problem, clarifying ideas like counterfactual, probability, and correlation by making them more precise and coherent.
Then, in the 1990s, a breakthrough: Judea Pearl and others showed that, in principle, we can sometimes infer causal relations from data even without experiment, via the mathematical machinery of probabilistic graphical models.
Next, engineers used this mathematical insight to write software that can, in seconds, infer causal relations from a data set of observations.
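To make the engineering stage concrete, here is a toy illustration (in Python, using only NumPy; the variable names and data-generating chain are invented for this example, and this is a far cry from Pearl's full machinery): when purely observational data come from a causal chain X → Y → Z, X and Z are correlated, but become independent once we condition on Y. Causal-discovery software infers structure from exactly this kind of conditional-independence pattern.

```python
import numpy as np

# Simulate observational data from the causal chain X -> Y -> Z.
rng = np.random.default_rng(0)
n = 10_000
x = rng.standard_normal(n)
y = x + rng.standard_normal(n)   # Y is caused by X
z = y + rng.standard_normal(n)   # Z is caused by Y

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, given):
    # Partial correlation of a and b, controlling for 'given'.
    r_ab, r_ag, r_bg = corr(a, b), corr(a, given), corr(b, given)
    return (r_ab - r_ag * r_bg) / np.sqrt((1 - r_ag**2) * (1 - r_bg**2))

print(corr(x, z))             # clearly nonzero: X and Z are dependent
print(partial_corr(x, z, y))  # near zero: X is independent of Z given Y
```

No experiment was performed on this "system," yet the independence pattern rules out, for example, a structure in which X directly causes Z.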
Across the centuries, researchers had toiled away, pushing our understanding of causality from philosophy to math to engineering.
And so it is with Friendly AI research. Current progress on each sub-problem of Friendly AI lies somewhere on a spectrum from philosophy to math to engineering.
Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He is known as a founder of the field of prediction markets; he was a chief architect of the Foresight Exchange, DARPA’s FutureMAP, IARPA’s DAGGRE, and SciCast, and is chief scientist at Consensus Point. He started the first internal corporate prediction market at Xanadu in 1990, and invented the widely used market scoring rules. He also studies signaling and information aggregation, and is writing a book on the social implications of brain emulations. He blogs at Overcoming Bias.
Hanson received a B.S. in physics from the University of California, Irvine in 1981, an M.S. in physics and an M.A. in Conceptual Foundations of Science from the University of Chicago in 1984, and a Ph.D. in social science from Caltech in 1997. Before getting his Ph.D., he researched artificial intelligence, Bayesian statistics, and hypertext publishing at Lockheed, NASA, and elsewhere.
Luke Muehlhauser: In an earlier blog post, I wrote about the need for what I called AGI impact experts who “develop skills related to predicting technological development, predicting AGI’s likely impact on society, and identifying which interventions are most likely to increase humanity’s chances of safely navigating the creation of AGI.”
In 2009, you gave a talk called “How does society identify experts and when does it work?” Given the study you’ve done of expertise, what do you think of humanity’s prospects for developing these AGI impact experts? If they are developed, do you think society will be able to recognize who is an expert and who is not?
During his time as a MIRI research fellow, Carl Shulman co-authored (with Nick Bostrom) a paper that is now available as a preprint, titled “Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?”
Abstract:
Human capital is a key determinant of personal and national outcomes, and a major input to scientific progress. It has been suggested that advances in genomics will make it possible to enhance human intellectual abilities. One way to do this would be via embryo selection in the context of in vitro fertilization (IVF). In this article, we analyze the feasibility, timescale, and possible societal impacts of embryo selection for cognitive enhancement. We find that embryo selection, on its own, may have significant impacts, but likely not drastic ones, over the next 50 years, though large effects could accumulate over multiple generations. However, there is a complementary technology, stem cell-derived gametes, which has been making rapid progress and which could amplify the impact of embryo selection, enabling very large changes if successfully applied to humans.
The last sentence refers to “iterated embryo selection” (IES), a future technology first described by MIRI in 2009. This technology has significant strategic relevance for Friendly AI (FAI) development because it might be the only intelligence amplification (IA) technology (besides WBE) to have large enough effects on human intelligence to substantially shift our odds of getting FAI before arbitrary AGI, if AGI is developed sometime this century.
Unfortunately, it remains unclear whether the arrival of IES would shift our FAI chances positively or negatively. On the one hand, a substantially smarter humanity may be wiser, and more likely to get FAI right. On the other hand, IES might accelerate AGI relative to FAI, since AGI is more parallelizable than FAI. (For more detail, and more arguments pointing in both directions, see Intelligence Amplification and Friendly AI.)
Dr. Markus Schmidt is founder and team leader of Biofaction, a research and science communication company in Vienna, Austria. With an educational background in electronic engineering, biology, and environmental risk assessment, he has carried out environmental risk assessment, safety, and public perception studies in a number of science and technology fields (GM crops, gene therapy, nanotechnology, converging technologies, and synthetic biology) for more than 10 years.
He has been a coordinator or partner in several national and European research projects, for example SYNBIOSAFE, the first European project on safety and ethics of synthetic biology (2007-2008); COSY, on communicating synthetic biology (2008-2009); TARPOL, on industrial and environmental applications of synthetic biology (2008-2010); CISYNBIO, on the depiction of synthetic biology in movies (2009-2012); a joint Sino-Austrian project on synthetic biology and risk assessment (2009-2012); and ST-FLOW, on standardization for robust bioengineering of new-to-nature biological properties (2011-2015).
He produced science policy reports for the Office of Technology Assessment at the German Bundestag (on GM crops in China) and for the Austrian Ministry of Transport, Innovation and Technology (on nanotechnology and converging technologies). He served as an advisor to the European Group on Ethics (EGE) of the European Commission, the US Presidential Commission for the Study of Bioethical Issues, the J. Craig Venter Institute, the Alfred P. Sloan Foundation, and the Bioethics Council of the German Parliament, as well as to several thematically related international projects. Markus Schmidt is the author of several peer-reviewed articles; he has edited a special issue and two books about synthetic biology and its societal ramifications, and produced the first documentary film about synthetic biology.
In addition to his scientific work, he organized a Science Film Festival and produced an art exhibition (both 2011) to explore novel and creative ideas about, and interpretations of, the future of biotechnology.
Luke Muehlhauser: I’ll start by giving our readers a quick overview of synthetic biology, the “design and construction of biological devices and systems for useful purposes.” As explained in a 2012 book you edited, major applications of synthetic biology include:
But in addition to promoting the useful applications of synthetic biology, you also speak and write extensively about the potential risks of synthetic biology. Which risks from novel biotechnologies are you most concerned about?
Bas Steunebrink is a postdoctoral researcher at the Swiss AI lab IDSIA, as part of Prof. Schmidhuber’s group. He received his PhD in 2010 from Utrecht University, the Netherlands. Bas’s dissertation was on the subject of artificial emotions, which fits well with his continuing quest to find practical and creative ways in which generally intelligent agents can deal with time and resource constraints. A recent paper on how such agents will naturally strive to be effective, efficient, and curious was awarded the Kurzweil Prize for Best AGI Idea at AGI 2013. Bas also has a great interest in anything related to self-reflection and meta-learning, and all “meta” stuff in general.
Luke Muehlhauser: One of your ongoing projects has been a Gödel machine (GM) implementation. Could you please explain (1) what a Gödel machine is, (2) why you’re motivated to work on that project, and (3) what your implementation of it does?
Bas Steunebrink: A GM is a program consisting of two parts running in parallel; let’s name them Solver and Searcher. Solver can be any routine that does something useful, such as solving task after task in some environment. Searcher is a routine that tries to find beneficial modifications to both Solver and Searcher, i.e., to any part of the GM’s software. So Searcher can inspect and modify any part of the Gödel machine. The trick is that the initial setup of Searcher only allows Searcher to make such a self-modification if it has a proof that performing this self-modification is beneficial in the long run, according to an initially provided utility function. Since Solver and Searcher are running in parallel, you could say that a third component is necessary: a Scheduler. Of course Searcher also has read and write access to the Scheduler’s code.
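The Solver/Searcher architecture described above can be caricatured in a few lines of Python. To be clear, this is not a Gödel machine: a real GM’s Searcher hunts for formal proofs of long-run benefit, and can rewrite itself and the Scheduler. In this sketch (all names invented), the “proof” is replaced by an exact before/after evaluation of a toy utility function, and parallelism is replaced by a scheduler loop that interleaves the two parts:

```python
import random

random.seed(0)  # deterministic toy run

class GodelMachineSketch:
    """Structural caricature of a Gödel machine (NOT a real implementation:
    a real GM requires a formal proof searcher, and its Searcher can also
    rewrite itself and the Scheduler, which this sketch omits)."""

    def __init__(self):
        self.solver_param = 1   # Solver's "policy": step size on toy tasks
        self.searcher_log = []  # record of accepted self-modifications

    def solver_step(self, task):
        # Solver: do something useful -- here, count steps to reach a target.
        steps, pos = 0, 0
        while pos < task:
            pos += self.solver_param
            steps += 1
        return steps

    def utility(self):
        # Initially provided utility function: fewer steps on a benchmark.
        return -self.solver_step(100)

    def searcher_step(self):
        # Searcher: propose a modification to Solver and apply it only if it
        # is provably beneficial. The stand-in "proof" is an exact utility
        # evaluation before and after the tentative rewrite.
        candidate = self.solver_param + random.choice([-1, 1])
        if candidate < 1:
            return
        old_utility, old_param = self.utility(), self.solver_param
        self.solver_param = candidate           # tentatively rewrite Solver
        if self.utility() > old_utility:
            self.searcher_log.append((old_param, candidate))  # keep it
        else:
            self.solver_param = old_param       # revert: no proof of benefit

def run(machine, cycles=50):
    # Scheduler: interleave Solver and Searcher (stand-in for parallelism).
    for _ in range(cycles):
        machine.solver_step(10)
        machine.searcher_step()
    return machine

machine = run(GodelMachineSketch())
print(machine.solver_param, machine.searcher_log)
```

Every modification in `searcher_log` strictly improved the toy utility before it was kept; the real machine’s guarantee is far stronger, since a proof covers all future consequences rather than a single benchmark re-evaluation.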