Greg Morrisett on Secure and Reliable Systems

Conversations

Greg Morrisett is the Allen B. Cutting Professor of Computer Science at Harvard University. He received his B.S. in Mathematics and Computer Science from the University of Richmond in 1989, and his Ph.D. from Carnegie Mellon in 1995. In 1996, he took a position at Cornell University, and in the 2003-04 academic year, he took a sabbatical and visited the Microsoft European Research Laboratory. In 2004, he moved to Harvard, where he has served as Associate Dean for Computer Science and Engineering, and where he currently heads the Harvard Center for Research on Computation and Society.

Morrisett has received a number of awards for his research on programming languages, type systems, and software security, including a Presidential Early Career Award for Scientists and Engineers, an IBM Faculty Fellowship, an NSF Career Award, and an Alfred P. Sloan Fellowship.

He served as Chief Editor for the Journal of Functional Programming and as an associate editor for ACM Transactions on Programming Languages and Systems and Information Processing Letters. He currently serves on the editorial board of the Journal of the ACM and as co-editor-in-chief for the Research Highlights column of Communications of the ACM. In addition, Morrisett has served on the DARPA Information Science and Technology (ISAT) study group, the NSF Computer and Information Science and Engineering (CISE) Advisory Council, Microsoft Research’s Technical Advisory Board, and Microsoft’s Trustworthy Computing Academic Advisory Board.

Luke Muehlhauser: One of the interesting projects in which you’re involved is SAFE, a DARPA-funded project “focused on a clean slate design for resilient and secure systems.” What is the motivation for this project, and in particular for its “clean slate” approach?

Read more »

From Philosophy to Math to Engineering

Analysis

For centuries, philosophers wondered how we could learn what causes what. Some argued it was impossible, or possible only via experiment. Others kept hacking away at the problem, clarifying ideas like counterfactuals, probability, and correlation by making them more precise and coherent.

Then, in the 1990s, a breakthrough: Judea Pearl and others showed that, in principle, we can sometimes infer causal relations from data even without experiment, via the mathematical machinery of probabilistic graphical models.

Next, engineers used this mathematical insight to write software that can, in seconds, infer causal relations from a data set of observations.
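To make the point concrete, here is a minimal sketch (illustrative only, not Pearl’s full machinery and not any particular library’s API) of the kind of conditional-independence check such software builds on: data generated from the chain X → Y → Z show X and Z correlated overall, yet nearly uncorrelated once Y is held fixed, which is exactly the signature a causal-discovery algorithm looks for.

    # Illustrative sketch only (hypothetical variable names, no specific
    # causal-discovery library assumed). Data come from the chain X -> Y -> Z;
    # X and Z are correlated, but approximately independent given Y.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    x = rng.normal(size=n)
    y = x + rng.normal(size=n)   # X causes Y
    z = y + rng.normal(size=n)   # Y causes Z, so X affects Z only through Y

    def partial_corr(a, b, given):
        """Correlation of a and b after regressing `given` out of both."""
        g = np.column_stack([np.ones_like(given), given])
        ra = a - g @ np.linalg.lstsq(g, a, rcond=None)[0]
        rb = b - g @ np.linalg.lstsq(g, b, rcond=None)[0]
        return np.corrcoef(ra, rb)[0, 1]

    print(np.corrcoef(x, z)[0, 1])   # clearly nonzero: X and Z are dependent
    print(partial_corr(x, z, y))     # near zero: X independent of Z given Y

Constraint-based causal-discovery algorithms automate many such tests across all the observed variables and return the set of causal graphs consistent with the resulting pattern of dependencies and independencies.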

Across the centuries, researchers had toiled away, pushing our understanding of causality from philosophy to math to engineering.

And so it is with Friendly AI research. Current progress on each sub-problem of Friendly AI lies somewhere on a spectrum from philosophy to math to engineering.

Read more »

Robin Hanson on Serious Futurism

Conversations

Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He is known as a founder of the field of prediction markets; he was a chief architect of the Foresight Exchange, DARPA’s FutureMAP, IARPA’s DAGGRE, and SciCast, and is chief scientist at Consensus Point. He started the first internal corporate prediction market at Xanadu in 1990, and invented the widely used market scoring rules. He also studies signaling and information aggregation, and is writing a book on the social implications of brain emulations. He blogs at Overcoming Bias.

Hanson received a B.S. in physics from the University of California, Irvine in 1981, an M.S. in physics and an M.A. in Conceptual Foundations of Science from the University of Chicago in 1984, and a Ph.D. in social science from Caltech in 1997. Before getting his Ph.D., he researched artificial intelligence, Bayesian statistics, and hypertext publishing at Lockheed, NASA, and elsewhere.

Luke Muehlhauser: In an earlier blog post, I wrote about the need for what I called AGI impact experts who “develop skills related to predicting technological development, predicting AGI’s likely impact on society, and identifying which interventions are most likely to increase humanity’s chances of safely navigating the creation of AGI.”

In 2009, you gave a talk called “How does society identify experts and when does it work?” Given the study you’ve done and the expertise you’ve developed, what do you think of humanity’s prospects for developing these AGI impact experts? If they are developed, do you think society will be able to recognize who is an expert and who is not?

Read more »

New Paper: “Embryo Selection for Cognitive Enhancement”

Papers

During his time as a MIRI research fellow, Carl Shulman co-authored (with Nick Bostrom) a paper that is now available as a preprint, titled “Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?”

Abstract:

Human capital is a key determinant of personal and national outcomes, and a major input to scientific progress. It has been suggested that advances in genomics will make it possible to enhance human intellectual abilities. One way to do this would be via embryo selection in the context of in vitro fertilization (IVF). In this article, we analyze the feasibility, timescale, and possible societal impacts of embryo selection for cognitive enhancement. We find that embryo selection, on its own, may have significant impacts, but likely not drastic ones, over the next 50 years, though large effects could accumulate over multiple generations. However, there is a complementary technology, stem cell-derived gametes, which has been making rapid progress and which could amplify the impact of embryo selection, enabling very large changes if successfully applied to humans.

The last sentence refers to “iterated embryo selection” (IES), a future technology first described by MIRI in 2009. This technology has significant strategic relevance for Friendly AI (FAI) development because it might be the only intelligence amplification (IA) technology (besides whole brain emulation) with large enough effects on human intelligence to substantially shift our odds of getting FAI before arbitrary AGI, if AGI is developed sometime this century.

Unfortunately, it remains unclear whether the arrival of IES would shift our FAI chances positively or negatively. On the one hand, a substantially smarter humanity may be wiser, and more likely to get FAI right. On the other hand, IES might accelerate AGI relative to FAI, since AGI is more parallelizable than FAI. (For more detail, and more arguments pointing in both directions, see Intelligence Amplification and Friendly AI.)

Markus Schmidt on Risks from Novel Biotechnologies

Conversations

Dr. Markus Schmidt is founder and team leader of Biofaction, a research and science communication company in Vienna, Austria. With an educational background in electronic engineering, biology, and environmental risk assessment, he has carried out environmental risk assessment, safety, and public perception studies in a number of science and technology fields (GM crops, gene therapy, nanotechnology, converging technologies, and synthetic biology) for more than 10 years.

He has been a coordinator or partner in several national and European research projects, for example SYNBIOSAFE, the first European project on the safety and ethics of synthetic biology (2007-2008); COSY, on communicating synthetic biology (2008-2009); TARPOL, on industrial and environmental applications of synthetic biology (2008-2010); CISYNBIO, on the depiction of synthetic biology in movies (2009-2012); a joint Sino-Austrian project on synthetic biology and risk assessment (2009-2012); and ST-FLOW, on standardization for robust bioengineering of new-to-nature biological properties (2011-2015).

He has produced science policy reports for the Office of Technology Assessment at the German Bundestag (on GM crops in China) and for the Austrian Ministry of Transport, Innovation and Technology (on nanotechnology and converging technologies). He has served as an advisor to the European Group on Ethics (EGE) of the European Commission, the US Presidential Commission for the Study of Bioethical Issues, the J. Craig Venter Institute, the Alfred P. Sloan Foundation, and the Bioethics Council of the German Parliament, as well as to several thematically related international projects. Markus Schmidt is the author of several peer-reviewed articles; he has edited a special issue and two books about synthetic biology and its societal ramifications, and produced the first documentary film about synthetic biology.

In addition to his scientific work, he organized a Science Film Festival and produced an art exhibition (both in 2011) to explore novel and creative ideas and interpretations of the future of biotechnology.

Luke Muehlhauser: I’ll start by giving our readers a quick overview of synthetic biology, the “design and construction of biological devices and systems for useful purposes.” As explained in a 2012 book you edited, major applications of synthetic biology include:

  • Biofuels: ethanol, algae-based fuels, bio-hydrogen, microbial fuel cells, etc.
  • Bioremediation: wastewater treatment, water desalination, solid waste decomposition, CO₂ recapturing, etc.
  • Biomaterials: bioplastics, bulk chemicals, cellulosomes, etc.
  • Novel developments: protocells and xenobiology for the production of novel cells and organisms.

But in addition to promoting the useful applications of synthetic biology, you also speak and write extensively about the potential risks of synthetic biology. Which risks from novel biotechnologies are you most concerned about?

Read more »

Bas Steunebrink on Self-Reflective Programming

Conversations

Bas Steunebrink is a postdoctoral researcher at the Swiss AI lab IDSIA, as part of Prof. Schmidhuber’s group. He received his PhD in 2010 from Utrecht University, the Netherlands. Bas’s dissertation was on the subject of artificial emotions, which fits well with his continuing quest to find practical and creative ways in which generally intelligent agents can deal with time and resource constraints. A recent paper on how such agents will naturally strive to be effective, efficient, and curious was awarded the Kurzweil Prize for Best AGI Idea at AGI 2013. Bas also has a great interest in anything related to self-reflection and meta-learning, and all “meta” stuff in general.

Luke Muehlhauser: One of your ongoing projects has been a Gödel machine (GM) implementation. Could you please explain (1) what a Gödel machine is, (2) why you’re motivated to work on that project, and (3) what your implementation of it does?


Bas Steunebrink: A GM is a program consisting of two parts running in parallel; let’s name them Solver and Searcher. Solver can be any routine that does something useful, such as solving task after task in some environment. Searcher is a routine that tries to find beneficial modifications to both Solver and Searcher, i.e., to any part of the GM’s software. So Searcher can inspect and modify any part of the Gödel Machine. The trick is that the initial setup of Searcher only allows Searcher to make such a self-modification if it has a proof that performing this self-modification is beneficial in the long run, according to an initially provided utility function. Since Solver and Searcher are running in parallel, you could say that a third component is necessary: a Scheduler. Of course Searcher also has read and write access to the Scheduler’s code.
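A rough structural sketch of that arrangement follows (a toy illustration only, with hypothetical names; the proof search, which is the hard part of a real Gödel machine, is stubbed out here):

    # Toy sketch of the control structure described above, not a real Goedel
    # machine: Solver does useful work, Searcher hunts for provably beneficial
    # self-rewrites, and a Scheduler interleaves the two. All names are
    # illustrative, and the proof search itself is stubbed out.
    class GoedelMachine:
        def __init__(self, solver, searcher, scheduler, utility):
            self.solver = solver        # solves task after task in the environment
            self.searcher = searcher    # searches for self-modifications plus proofs
            self.scheduler = scheduler  # allocates compute between Solver and Searcher
            self.utility = utility      # the initially provided utility function

        def step(self, environment):
            # The Scheduler (itself readable and rewritable by Searcher)
            # decides what to run next.
            if self.scheduler(self) == "solve":
                self.solver(self, environment)
            else:
                rewrite = self.searcher(self)
                # Apply a self-modification only if it comes with a proof that
                # doing so is beneficial in the long run under the utility function.
                if rewrite is not None and rewrite.provably_beneficial(self.utility):
                    rewrite.apply(self)  # may rewrite solver, searcher, or scheduler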

[Figure: Gödel machine, diagram of scheduler]

Read more »

Probabilistic Metamathematics and the Definability of Truth

News, Video

On October 15th, Paul Christiano presented “Probabilistic metamathematics and the definability of truth” at Harvard University as part of Logic at Harvard (details here). As explained here, Christiano came up with the idea for this approach, and it was developed further at a series of MIRI research workshops.

Video of the talk is now available:

The video is occasionally blurry due to camera problems, but is still clear enough to watch.

Hadi Esmaeilzadeh on Dark Silicon

Conversations

Hadi Esmaeilzadeh recently joined the School of Computer Science at the Georgia Institute of Technology as an assistant professor. He is the first holder of the Catherine M. and James E. Allchin Early Career Professorship. Hadi directs the Alternative Computing Technologies (ACT) Lab, where he and his students are working on developing new technologies and cross-stack solutions to improve the performance and energy efficiency of computer systems for emerging applications. Hadi received his Ph.D. from the Department of Computer Science and Engineering at the University of Washington. He has a Master’s degree in Computer Science from The University of Texas at Austin, and a Master’s degree in Electrical and Computer Engineering from the University of Tehran. Hadi’s research has been recognized by three Communications of the ACM Research Highlights and three IEEE Micro Top Picks. Hadi’s work on dark silicon has also been profiled in The New York Times.

Luke Muehlhauser: Could you please explain for our readers what “dark silicon” is, and why it poses a threat to the historical exponential trend in computing performance growth?


Hadi Esmaeilzadeh: I would like to answer your question with a question. What is the difference between the computing industry and the commodity industries like the paper towel industry?

The main difference is that the computing industry is an industry of new possibilities, while the paper towel industry is an industry of replacement. You buy paper towels because you run out of them, but you buy new computing products because they get better.

And it is not just the computers that are improving; the services and experiences offered consistently improve as well. Can you even imagine running out of Microsoft Windows?

One of the primary drivers of this economic model is the exponential reduction in the cost of performing general-purpose computing. While in 1971, at the dawn of microprocessors, the price of 1 MIPS (million instructions per second) was roughly $5,000, today it is about 4¢. This is an exponential reduction in the cost of the raw material of computing, and this continuous reduction has formed the basis of the computing industry’s economy over the past four decades.
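As a quick back-of-the-envelope check on those figures (treating “today” as 2013, the year of this conversation, which is an assumption on our part), the drop from $5,000 to 4¢ per MIPS corresponds to the cost shrinking by a factor of about 1.3 every year, i.e. halving roughly every two and a half years:

    # Back-of-the-envelope check of the quoted trend. Treating "today" as
    # 2013 (the year of this conversation) is an assumption.
    import math

    cost_1971 = 5000.0   # dollars per MIPS in 1971
    cost_today = 0.04    # dollars per MIPS "today"
    years = 2013 - 1971

    total_reduction = cost_1971 / cost_today        # about 125,000x cheaper
    annual_factor = total_reduction ** (1 / years)  # about 1.3x cheaper per year
    halving_time = math.log(2) / math.log(annual_factor)

    print(f"{total_reduction:,.0f}x cheaper overall")
    print(f"cost falls ~{annual_factor:.2f}x per year, halving every ~{halving_time:.1f} years")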

Read more »