Dave Doty on algorithmic self-assembly

Conversations

Dave Doty is a Senior Research Fellow at the California Institute of Technology. He proves theorems about molecular computing and conducts experiments implementing algorithmic molecular self-assembly with DNA.





Luke Muehlhauser: A couple years ago you wrote a review article on algorithmic self-assembly, and also created a video introduction to the subject. Your review article begins:

Self-assembly is the process by which small components automatically assemble themselves into large, complex structures. Examples in nature abound: lipids self-assemble a cell’s membrane, and bacteriophage virus proteins self-assemble a capsid that allows the virus to invade other bacteria. Even a phenomenon as simple as crystal formation is a process of self-assembly… Algorithmic self-assembly systems automate a series of simple growth tasks, in which the object being grown is simultaneously the machine controlling its own growth.

As an example, here’s an electron microscope image of a Sierpinski triangle produced via algorithmic self-assembly of DNA molecules:
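
The local rule behind that pattern can be captured in a few lines. In the abstract Tile Assembly Model, the Sierpinski tile set attaches each new tile so that its label is the XOR of the labels of its south and west neighbors; iterating this rule from an L-shaped seed of 1s yields Pascal’s triangle mod 2, which is the Sierpinski pattern. A minimal simulation of the rule (illustrative only; it ignores binding strengths, tile glues, and assembly order):

```python
# Toy simulation of the Sierpinski tile set's local rule: each interior
# cell's label is the XOR of its south and west neighbors, growing from
# an L-shaped seed of 1s. This reproduces Pascal's triangle mod 2,
# i.e. the Sierpinski triangle.

def sierpinski(n):
    # Seed: a row and a column of 1s along two edges of an n x n grid.
    g = [[1] * n] + [[1] + [0] * (n - 1) for _ in range(n - 1)]
    for i in range(1, n):
        for j in range(1, n):
            # Cooperative attachment: the new tile's label is fully
            # determined by the two neighbors it binds to.
            g[i][j] = g[i - 1][j] ^ g[i][j - 1]
    return g

for row in sierpinski(16):
    print("".join("#" if v else "." for v in row))
```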


Ariel Procaccia on economics and computation

Conversations

Ariel Procaccia is an assistant professor in the Computer Science Department at Carnegie Mellon University. He received his Ph.D. in computer science from the Hebrew University of Jerusalem. He is a recipient of the NSF CAREER Award (2014), the (inaugural) Yahoo! Academic Career Enhancement Award (2011), the Victor Lesser Distinguished Dissertation Award (2009), and the Rothschild postdoctoral fellowship (2009). Procaccia was named in 2013 by IEEE Intelligent Systems to their biennial list of AI’s 10 to Watch. He is currently the editor of ACM SIGecom Exchanges, an associate editor of the Journal of AI Research (JAIR) and Autonomous Agents and Multi-Agent Systems (JAAMAS), and an editor of the upcoming Handbook of Computational Social Choice.


Suzana Herculano-Houzel on cognitive ability and brain size

Conversations

Suzana Herculano-Houzel is an associate professor at the Federal University of Rio de Janeiro, Brazil, where she heads the Laboratory of Comparative Neuroanatomy. She is a Scholar of the James McDonnell Foundation, and a Scientist of the Brazilian National Research Council (CNPq) and of the State of Rio de Janeiro (FAPERJ). Her main research interests are the cellular composition of the nervous system and the evolutionary and developmental origins of its diversity among animals, including humans; and the energetic cost associated with body size and number of brain neurons and how it impacted the evolution of humans and other animals.

Her latest findings show that the human brain, with an average of 86 billion neurons, is not extraordinary in its cellular composition compared to other primate brains – but it is remarkable in its enormous absolute number of neurons, which could not have been achieved without a major change in the diet of our ancestors. Such a change was provided by the invention of cooking, which she proposes to have been a major watershed in human brain evolution, allowing the rapid evolutionary expansion of the human brain. A short presentation of these findings is available at TED.com.

She is also the author of six books on the neuroscience of everyday life for the general public, a regular writer for the Scientific American magazine Mente & Cérebro since 2010, and a columnist for the Brazilian newspaper Folha de São Paulo since 2006, with over 200 articles published in this and other newspapers.

Luke Muehlhauser: Much of your work concerns the question “Why are humans smarter than other animals?” In a series of papers (e.g. 2009, 2012), you’ve argued that recent results show that some popular hypotheses are probably wrong. For example, the so-called “overdeveloped” human cerebral cortex contains roughly the same percentage of total brain neurons (19%) as do the cerebral cortices of other mammals. Rather, you argue, the human brain may simply be a “linearly scaled-up primate brain”: primate brains seem to have more economical scaling rules than do other mammals, and humans have the largest brain of any primate, and hence the most total neurons.

Your findings were enabled by a new method for neuron quantification developed at your lab, called “isotropic fractionator” (Herculano-Houzel & Lent 2005). Could you describe how that method works?

Suzana Herculano-Houzel: The isotropic fractionator consists pretty much of turning fixed brain tissue into soup – a soup of a known volume containing free cell nuclei, which can be easily colored (by staining the DNA that all nuclei contain) and thus visualized and counted under a microscope. Since every cell in the brain contains one and only one nucleus, counting nuclei is equivalent to counting cells. The beauty of the soup is that it is fast (total numbers of cells can be known in a few hours for a small brain, and in about one month for a human-sized brain), inexpensive, and very reliable – as much or more than the usual alternative, which is stereology.

Stereology, in comparison, consists of cutting entire brains into a series of very thin slices; processing the slices to allow visualization of the cells (which are otherwise transparent); delineating structures of interest; creating a sampling strategy to account for the heterogeneity in the distribution of cells across brain regions (a problem that is literally dissolved away in the detergent that we use in the isotropic fractionator); acquiring images of these small brain regions to be sampled; and actually counting cells in each of these samples. It is a process that can take a week or more for a single mouse brain. It is more powerful in the sense that spatial information is preserved (while the tissue is necessarily destroyed when turned into soup for our purposes), but on the other hand, it is much more labor-intensive and not appropriate for working on entire brains, because of the heterogeneity across brain parts.
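
The arithmetic behind the isotropic fractionator can be sketched in a few lines: dissolve the fixed tissue into a suspension of known volume, count nuclei in a few small aliquots under the microscope, and scale up. All the numbers below are hypothetical, chosen only to illustrate the calculation, and `estimate_total_cells` is my own name for the step, not software from Herculano-Houzel’s lab.

```python
# Back-of-the-envelope sketch of the isotropic fractionator arithmetic.
# Fixed brain tissue becomes a "soup" of known volume containing free
# nuclei; counts from small sampled aliquots are scaled up to the whole
# suspension. Because each cell contains exactly one nucleus, counting
# nuclei is equivalent to counting cells. All numbers are hypothetical.

def estimate_total_cells(counts_per_aliquot, aliquot_volume_ul, soup_volume_ul):
    """Scale the mean nuclei-per-aliquot count up to the full volume."""
    mean_count = sum(counts_per_aliquot) / len(counts_per_aliquot)
    density = mean_count / aliquot_volume_ul      # nuclei per microliter
    return density * soup_volume_ul

# Three 10-microliter aliquots drawn from a 20,000-microliter suspension:
total = estimate_total_cells([540, 560, 550], aliquot_volume_ul=10,
                             soup_volume_ul=20_000)
print(f"estimated total: {total:.2e} cells")      # → 1.10e+06
```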

Martin Hilbert on the world’s information capacity

Conversations

Martin Hilbert pursues a multidisciplinary approach to understanding the role of information, communication, and knowledge in the development of complex social systems. He holds doctorates in Economics and Social Sciences, and in Communication, a life-long appointment as Economic Affairs Officer of the United Nations Secretariat, and is part of the faculty of the University of California, Davis. Before joining UCD he created and coordinated the Information Society Programme of the United Nations Regional Commission for Latin America and the Caribbean. He provided hands-on technical assistance to Presidents, government experts, legislators, diplomats, NGOs, and companies in over 20 countries. He has written several books about digital development and published in recognized academic journals such as Science, Psychological Bulletin, World Development, and Complexity. His research findings have been featured in popular outlets like Scientific American, WSJ, Washington Post, The Economist, NPR, and BBC, among others.

Luke Muehlhauser: You lead an ongoing research project aimed at “estimating the global technological capacity to store, communicate and compute information.” Your results have been published in Science and other journals, and we used your work heavily in The world’s distribution of computation. What are you able to share in advance about the next few studies you plan to release through that project?



MIRI Strategy

I wrote a short profile of MIRI for a forthcoming book on effective altruism. It leaves out many important details, but hits many of the key points pretty succinctly:

The Machine Intelligence Research Institute (MIRI) was founded in 2000 on the premise that creating smarter-than-human artificial intelligence with a positive impact — “Friendly AI” — might be a particularly efficient way to do as much good as possible.

First, because future people vastly outnumber presently existing people, we think that “From a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years.” (See Nick Beckstead’s On the Overwhelming Importance of Shaping the Far Future.)

Second, as an empirical matter, we think that smarter-than-human AI is humanity’s most significant point of leverage on that “general trajectory along which our descendants develop.” If we handle advanced AI wisely, it could produce tremendous goods which endure for billions of years. If we handle advanced AI poorly, it could render humanity extinct. No other future development has more upside or downside. (See Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies.)

Third, we think that Friendly AI research is tractable, urgent, and uncrowded.

Tractable: Our staff researchers and visiting workshop participants tackle open problems in Friendly AI theory, such as: How can we get an AI to preserve its original goals even as it learns new things and modifies its own code? How do we load desirable goals into a self-modifying AI? How do we ensure that advanced AIs will cooperate with each other and with modified versions of themselves? This work is currently at a theoretical stage, but we are making clear conceptual progress, and growing a new community of researchers devoted to solving these problems.

Urgent: Surveys of AI scientists, as well as our own estimates, suggest that smarter-than-human AI will be invented in the second half of the 21st century, if not sooner. Unfortunately, mathematical challenges such as those we need to solve to build Friendly AI often require several decades of research to overcome, with each new result building on the advances that came before. Moreover, because the invention of smarter-than-human AI is so difficult to predict, it may arrive with surprising swiftness, leaving us with little time to prepare.

Uncrowded: Very few researchers, perhaps fewer than five worldwide, are explicitly devoted to full-time Friendly AI research.

The overwhelming power of machine superintelligence will reshape our world, dominating other causal factors. Our intended altruistic effects on the vast majority of beings who will ever live must largely reach them via the technical design of the first self-improving smarter-than-human AIs. Many ongoing efforts — on behalf of better altruism, better reasoning, better global coordination, etc. — will play a role in this story, but we think it is crucial to also directly address the core challenge: the design of stably self-improving AIs with desirable goals. Failing to solve that problem will render humanity’s other efforts moot.

If our mission appeals to you, you can either fund our research or get involved in other ways.


Thomas Bolander on self-reference and agent introspection

Conversations

Thomas Bolander, Ph.D., is an associate professor at DTU Compute, Technical University of Denmark. He does research in logic and artificial intelligence, with a primary focus on the use of logic to model human-like planning, reasoning, and problem solving. Of special interest is the modelling of social phenomena and social intelligence, with the aim of creating computer systems that can interact intelligently with humans and other computer systems.

Luke Muehlhauser: Bolander (2003) and some of your subsequent work study paradoxes of self-reference in the context of logical/computational agents, as does e.g. Weaver (2013). Do you think your work on these paradoxes will have practical import for AI researchers who are designing computational agents, or are you merely using the agent framework to explore the more philosophical aspects of self-reference?


Jonathan Millen on covert channel communication

Conversations

Jonathan Millen started work at the MITRE Corporation in 1969, after graduation from Rensselaer Polytechnic Institute with a Ph.D. in Mathematics. He retired from MITRE in 2012 as a Senior Principal in the Information Security Division. From 1997 to 2004 he enjoyed an interlude as a Senior Computer Scientist in the SRI International Computer Science Laboratory. He has given short courses at RPI Hartford, University of Bologna Summer School, ETH Zurich, and Taiwan University of Science and Technology. He organized the IEEE Computer Security Foundations Symposium (initially a workshop) in 1988, and co-founded (with S. Jajodia) the Journal of Computer Security in 1992. He has held positions as General and Program Chair of the IEEE Security and Privacy Symposium, Chair of the IEEE Computer Society Technical Committee on Security and Privacy, and associate editor of the ACM Transactions on Information and System Security.

The theme of his computer security interests is the verification of formal specifications of security kernels and cryptographic protocols. At MITRE he supported the DoD Trusted Product Evaluation Program, and later worked on the application of Trusted Platform Modules. He wrote several papers on information flow as applied to covert channel detection and measurement. His 2001 paper (with V. Shmatikov) on the Constraint Solver for protocol analysis received the SIGSAC Test of Time award in 2011. He received the ACM SIGSAC Outstanding Innovation award in 2009.

Luke Muehlhauser: Since you were a relatively early researcher in the field of covert channel communication, I’d like to ask you about the field’s early days, which are usually said to have begun with Lampson (1973). Do you know when the first covert channel attack was uncovered “in the wild”? My impression is that Lampson identified the general problem a couple decades before it was noticed being exploited in the wild; is that right?

Jonathan Millen: We might never know when real covert channel attacks were first noticed, or when they first occurred. When information is stolen by covert channel, the original data is still in place, so the theft can go unnoticed. Even if an attack is discovered, the victims are as reluctant as the perpetrators to acknowledge it. This is certainly the case with classified information, since a known attack is often classified higher than the information it compromises. The only evidence I have of real attacks before 1999 is from Robert Morris (senior), a pioneer in UNIX security, and for a while the Chief Scientist of the National Computer Security Center, which was organizationally within NSA. He stated at a security workshop that there had been real attacks. He wouldn’t say anything more; it was probably difficult enough to get clearance for that much.
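
To make the concept under discussion concrete, here is a toy model of a covert storage channel, entirely illustrative and unrelated to any specific historical attack. The sender and receiver share no legitimate message-passing interface, only the observable state of a shared resource; real channels have exploited shared state such as disk quotas, file locks, and CPU timing. The class and function names, and the lock-step rounds standing in for timing synchronization, are my simplifying assumptions.

```python
# Toy model of a covert storage channel: the sender never "sends" a
# message, it only modulates shared state that the receiver is allowed
# to observe. One bit leaks per round; lock-step rounds stand in for
# the timing synchronization a real attack would need.

class SharedResource:
    """The only state visible to both parties (e.g. 'queue is full')."""
    def __init__(self):
        self.busy = False

def send_bit(resource, bit):
    resource.busy = bool(bit)          # modulate the shared state

def receive_bit(resource):
    return 1 if resource.busy else 0   # observe, never communicate

def leak(message_bits):
    resource = SharedResource()
    received = []
    for b in message_bits:
        send_bit(resource, b)
        received.append(receive_bit(resource))
    return received

secret = [1, 0, 1, 1, 0, 0, 1]
assert leak(secret) == secret          # bits cross the "non-channel" intact
print("leaked:", leak(secret))
```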

Wolf Kohn on hybrid systems control

Conversations

Dr. Wolf Kohn is the Chief Scientist at Atigeo, LLC, and a Research Professor in Industrial and Systems Engineering at the University of Washington. He is the founder and co-founder of two successful start-up companies: Clearsight Systems, Corp., and Kohn-Nerode, Inc. Both companies explore applications in the areas of advanced optimal control, rule-based optimization, and quantum hybrid control applied to enterprise problems and nano-material shaping control. Prof. Kohn, with Prof. Nerode of Cornell, established theories and algorithms that initiated the field of hybrid systems. Prof. Kohn has a Ph.D. in Electrical Engineering and Computer Science from MIT, at the Laboratory for Information and Decision Systems. Dr. Kohn is the author or coauthor of over 100 refereed papers and 6 book chapters, and has written, with Nerode and Zabinsky, a book on distributed cooperative inferencing. Dr. Kohn holds 10 US and international patents.

Luke Muehlhauser: You co-founded the field of hybrid systems control with Anil Nerode. Anil gave his impressions of the seminal 1990 Pacifica meeting here. What were your own impressions of how that meeting developed? Is there anything in particular you’d like to add to Anil’s account?

Wolf Kohn: The discussion on the first day of the conference centered on how to incorporate heterogeneous descriptions of complex dynamical systems into a common representation for designing large-scale automation. Almost immediately, Colonel Mettala and others made observations that established as a goal finding alternatives to classic approaches based on combining expert systems with conventional control and system identification techniques.

These approaches did not lead to robust designs. More important, they did not lead to a theory for the systematic treatment of the systems DOD was deploying at the time. I was working on control architectures based on constraints defined by rules, so after intense discussions among the participants, Nerode and I moved to a corner and came up with a proposal to amalgamate models by extending the concepts of automata theory and optimal control to characterize the evolution of complex dynamical systems in a manifold in which the topology was defined by rules of operation, and behavior constraints and trajectories were generated by variational methods. This was the beginning of what would later be defined as “hybrid systems.”
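
The amalgamation Kohn describes, an automaton whose discrete modes each govern continuous dynamics, with rule-defined guards triggering transitions between modes, can be illustrated with the thermostat, a textbook hybrid automaton (this is a generic teaching example, not Kohn and Nerode’s own model, and all constants are illustrative):

```python
# Minimal hybrid-automaton sketch: two discrete modes ("heating",
# "cooling"), each with its own continuous dynamics (an ODE integrated
# by Euler steps), and rule-defined guards that switch modes. All
# constants are illustrative.

def simulate_thermostat(t_end, dt=0.01, temp=20.0):
    mode = "heating"                           # discrete automaton state
    trace = []
    t = 0.0
    while t < t_end:
        # Continuous flow: one Euler step of the current mode's ODE.
        if mode == "heating":
            temp += (30.0 - temp) * 0.5 * dt   # relax toward 30 degrees
        else:
            temp += (10.0 - temp) * 0.5 * dt   # relax toward 10 degrees
        # Discrete transitions: guard rules defining when modes switch.
        if mode == "heating" and temp >= 22.0:
            mode = "cooling"
        elif mode == "cooling" and temp <= 18.0:
            mode = "heating"
        trace.append((t, mode, temp))
        t += dt
    return trace

trace = simulate_thermostat(60.0)
print(f"final mode: {trace[-1][1]}, final temp: {trace[-1][2]:.2f}")
```

The guards keep every trajectory inside a band around the setpoints, which is the kind of behavior constraint the rule-defined topology is meant to enforce.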
