Aaron Tomb on crowd-sourced formal verification

Conversations

Aaron Tomb is a Principal Investigator at Galois, where his work includes research, development, and project leadership in the area of automated and semi-automated techniques for the analysis of software, including type systems, defect detection tools, formal verification, and more general software development tools based on deep analysis of program semantics. He joined Galois in 2007, and received a Ph.D. in Computer Science from the University of California, Santa Cruz, for a technique for identifying inconsistent assumptions in software source code.

Luke Muehlhauser: DARPA’s Crowd Sourced Formal Verification (CSFV) program “aims to investigate whether large numbers of non-experts can perform formal verification faster and more cost-effectively than conventional processes.” To that end, the program created Verigames.com, the home of several free online puzzle games.

Read more »

Lennart Beringer on the Verified Software Toolchain

Conversations

Lennart Beringer is an Associate Research Scholar at Princeton University, where he uses interactive proof assistants to develop provably correct compilers, program analyses, and other software verification tools. Previously, he held research appointments at the Ludwig-Maximilians-University Munich and the University of Edinburgh, where he developed proof-carrying-code techniques for mobile code architectures, focusing on properties of resource consumption and information flow security. He received his PhD degree from the University of Edinburgh for a language-based analysis of an idealized asynchronous processor architecture. In 2012, he served as the co-chair of the Third International Conference on Interactive Theorem Proving (ITP).

Read more »

Johann Schumann on high-assurance systems

Conversations

Dr. Johann Schumann is a member of the Robust Software Engineering (RSE) Group at NASA Ames. He obtained his habilitation degree (2000) from the Technische Universität München, Germany, on the application of automated theorem provers in software engineering. His PhD thesis (1991) was on high-performance parallel theorem provers. Dr. Schumann is engaged in research on software and system health management; verification and validation of advanced air traffic control algorithms and adaptive systems; statistical data analysis of air traffic control systems and UAS incident data; and the generation of reliable code for data analysis and state estimation. Dr. Schumann’s general research interests focus on the application of formal and statistical methods to improve the design and reliability of advanced safety- and security-critical software.

He is employed by SGT, Inc. as Chief Scientist, Computational Sciences.

Read more »

Sandor Veres on autonomous agents

Conversations

Professor Sandor Veres was born and educated in Hungary as an applied mathematician. He completed his PhD on the dynamical modelling of stochastic systems in 1983 and then worked in industry on computer-controlled systems. In 1987-1988 he received two consecutive scholarships, at Imperial College London and at Linacre College, Oxford.

Between 1989 and 1999 he was a lecturer in the Department of Electronic and Electrical Engineering at the University of Birmingham, where he pursued research in system identification, adaptive control, and embedded electronic control systems. In 2000 he joined the University of Southampton to work on active vibration control systems, adaptive and learning systems, satellite formation flying, and autonomous control. Since 2002 he has held a chair in control systems engineering, and he served as chairman of the IFAC Technical Committee on Adaptive and Learning Systems.

At Southampton he established the Centre for Complex Autonomous Systems Engineering, where he is now a visiting professor. His main research interest today is agent-based control systems, and since 2013 he has led the Autonomous Systems and Robotics Group in the Department of Automatic Control at the University of Sheffield. He has published about 200 papers, authored four books, and co-authored numerous software packages.

Read more »

New Paper: “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem”

Papers

We’ve released a new paper recently accepted to the MIPC workshop at AAAI-14: “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem” by LaVictoire et al. This paper is essentially a shortened version of Barasz et al. (2014). For the history of the key results therein, see Robust Cooperation: A Case Study in Friendly AI Research.

Abstract of the new paper:

Applications of game theory often neglect that real-world agents normally have some amount of out-of-band information about each other. We consider the limiting case of a one-shot Prisoner’s Dilemma between algorithms with read access to one another’s source code. Previous work has shown that cooperation is possible at a Nash equilibrium in this setting, but existing constructions require interacting agents to be identical or near-identical. We show that a natural class of agents is able to achieve mutual cooperation at a Nash equilibrium without any prior coordination of this sort.
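
To make the contrast concrete, here is a minimal, hypothetical Python sketch (not taken from the paper) of the kind of “identical agents” construction the abstract refers to, in which cooperation hinges on exact syntactic equality of source code; the paper’s contribution is a provability-based criterion that tolerates syntactic differences between agents:

```python
import inspect

COOPERATE, DEFECT = "C", "D"

def cliquebot(opponent_source: str) -> str:
    # Cooperate only with opponents whose source code is byte-for-byte
    # identical to our own (the brittle equilibrium from prior work).
    own_source = inspect.getsource(cliquebot)
    return COOPERATE if opponent_source == own_source else DEFECT

# Two exact copies of this agent cooperate with each other...
print(cliquebot(inspect.getsource(cliquebot)))    # "C"
# ...but any syntactic variant, however behaviorally equivalent, is defected
# against. The paper's agents instead cooperate when they can prove that the
# opponent cooperates back, a condition Löb's theorem makes self-consistent.
print(cliquebot("def variant(src): return 'C'"))  # "D"
```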

 

Christof Koch and Stuart Russell on machine superintelligence

News

Recently, Science Friday (hosted by Ira Flatow) featured an interview (page, mp3) with Christof Koch and Stuart Russell about machine superintelligence. Christof Koch is the Chief Scientific Officer of the Allen Institute for Brain Science, and Stuart Russell is a computer science professor at UC Berkeley and co-author of the world’s most-used AI textbook.

I was glad to hear both distinguished guests take seriously the opportunities and risks of AGI. Those parts of the conversation are excerpted below:

Read more »

Exponential and non-exponential trends in information technology

Analysis

Co-authored with Lila Rieber.

In The Singularity is Near, Ray Kurzweil writes that “every aspect of information and information technology is growing at an exponential pace.”1 In Abundance, the authors list eight fields — including nanomaterials, robotics, and medicine — as “exponentially growing fields.”2 The Second Machine Age says that “technical progress” in general is “improving exponentially.”3

These authors are correct to emphasize that exponential trends in technological development are surprisingly common (Nagy et al. 2013), and that these trends challenge the wisdom of our built-in heuristic to ignore futures that sound absurd. (To someone in the 1980s, the iPhone is absurd. To us, it is an affordable consumer good.)

Unfortunately, these and other popular discussions of “exponential technologies” are often very succinct and therefore ambiguous, resulting in public and professional misunderstanding.4 I (Luke) regularly encounter people who have read the books above and come away with the impression that all information technologies show roughly exponential trends all the time. But this isn’t true unless you have a very broad concept of what counts as “roughly exponential.”

So, without speculating much about what Kurzweil & company intend to claim, we’ll try to clear up some common misunderstandings about exponential technologies by showing a few examples of exponential and not-so-exponential trends in information technology. A more thorough survey of trends in information technology must be left to other investigators.5
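
As a preview of what we mean by “exponential” versus “not-so-exponential,” here is a small illustrative Python sketch with made-up numbers (not the data examined in this post): an exponential trend is a straight line on a log scale, so the fit of a least-squares line through the logged series is one rough test.

```python
import numpy as np

# Two made-up series: one doubling every 2 years, one growing linearly.
years = np.arange(2000, 2014)
exponential = 2.0 ** ((years - 2000) / 2.0)
linear = 1.0 + 3.0 * (years - 2000)

def log_linear_r2(y):
    # R^2 of a least-squares line through log(y): near 1.0 means the series
    # is well described as exponential.
    logy = np.log(y)
    slope, intercept = np.polyfit(years, logy, 1)
    fitted = slope * years + intercept
    ss_res = np.sum((logy - fitted) ** 2)
    ss_tot = np.sum((logy - logy.mean()) ** 2)
    return 1 - ss_res / ss_tot

print(log_linear_r2(exponential))  # 1.0: a perfect log-linear fit
print(log_linear_r2(linear))       # roughly 0.8: a visibly worse fit
```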

Read more »


  1. Page 85. In the same book, he also writes that “we see ongoing exponential growth of every aspect of information technology, including price-performance, capacity, and rate of adoption.” (p. 377). In How to Create a Mind, Kurzweil writes that “In the course of my investigation, I made a startling discovery: If a technology is an information technology, the basic measures of price/performance and capacity… follow amazingly precise exponential trajectories” (p. 254). 
  2. Page 57. In general, Diamandis & Kotler seem to agree with Kurzweil that all information technologies experience exponential growth curves. E.g. on page 99 they write that “Although [some agroecological] practices themselves look decidedly low tech, all the fields they’re informed by are information-based sciences and thus on exponential growth curves,” and on page 190 they write that “almost every component of medicine is now an information technology and therefore on an exponential trajectory.” 
  3. Page 10. The authors also seem to expect exponential trends for anything that becomes a digital process: “…batteries… haven’t improved their performance at an exponential rate because they’re essentially chemical devices, not digital ones…” (p. 52). 
  4. E.g. Kurzweil seems to use a fairly loose definition of “exponential.” For example, in Kurzweil (2001) he gives this chart of ISP cost-performance as an example of an exponential trend. Sometimes this seems to cause confusion in dialogue. For example, in response to Ilkka Tuomi’s criticisms (2002, 2003) of claims of exponential trends in computing, Kurzweil wrote that if Tuomi were correct, “I would have to conclude that the one-quarter MIPS computer costing several million dollars that I used at MIT in 1967 and the 1000 MIPS computer that I purchased recently for $2,000 never really existed… I admire his tenacity in attempting to prove that the world of information technology is flat (i.e., linear).” But Tuomi’s views don’t entail that, and Tuomi didn’t say that trends in information technology have been linear. The conflict appears to stem from the fact that Tuomi was using “exponential” in the strict sense, while Kurzweil was using the term in a very loose sense. This becomes clearer in Tuomi’s reply to Kurzweil.
  5. Our thanks to Jonah Sinick for his assistance in researching this post. 

Benjamin Pierce on clean-slate security architectures

Conversations

Benjamin C. Pierce is Henry Salvatori Professor of Computer and Information Science at the University of Pennsylvania and a Fellow of the ACM. His research interests include programming languages, type systems, language-based security, computer-assisted formal verification, differential privacy, and synchronization technologies. He is the author of the widely used graduate textbooks Types and Programming Languages and Software Foundations.

He has served as co-Editor-in-Chief of the Journal of Functional Programming, as Managing Editor for Logical Methods in Computer Science, and as an editorial board member of Mathematical Structures in Computer Science, Formal Aspects of Computing, and ACM Transactions on Programming Languages and Systems. He is also the lead designer of the popular Unison file synchronizer.

Luke Muehlhauser: I previously interviewed Greg Morrisett about the SAFE project, and about computer security in general. You’ve also contributed to the SAFE project and gave an “early retrospective” talk on it (slides), so I’d like to ask you some more detailed questions about it.

In particular, I’d like to ask about the “verified information-flow architecture” developed for SAFE. Can you give us an overview of the kinds of information flow security properties you were able to prove about the system?


Benjamin C. Pierce: Sure. First, to remind your readers: SAFE is a clean-slate design of a new hardware / software stack whose goal is to build a network host that is highly resilient to cyber-attack. One pillar of the design is pervasive mechanisms for tracking information flow. The SAFE hardware offers fine-grained tagging and efficient propagation and combination of tags on each instruction dispatch. The operating system virtualizes these generic facilities to provide an “information-flow abstract machine,” on which user programs run.
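
(To make the tagging idea concrete, here is a toy Python sketch of tag propagation over a two-point label lattice; the names and the lattice are invented for illustration and are far simpler than SAFE’s actual tag model.)

```python
from dataclasses import dataclass

# A two-point lattice of security labels; SAFE's real tags are far richer.
PUBLIC, SECRET = 0, 1

def join(l1: int, l2: int) -> int:
    # The result of an operation is at least as restrictive as its inputs.
    return max(l1, l2)

@dataclass
class Tagged:
    value: int
    label: int

def add(x: Tagged, y: Tagged) -> Tagged:
    # Tags are propagated and combined alongside the data on every operation,
    # as the SAFE hardware does on each instruction dispatch.
    return Tagged(x.value + y.value, join(x.label, y.label))

salary = Tagged(100_000, SECRET)
bonus = Tagged(5_000, PUBLIC)
print(add(salary, bonus))  # Tagged(value=105000, label=1): the sum stays SECRET
```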

Formal verification has been part of the SAFE design process right from the beginning. We’d originally hoped to be able to verify the actual running code of the OS in the style of seL4, but we found that the codebase was too big and moving too fast for this to be practical with a small verification team. Instead, we’ve developed a methodology that combines full formal verification of models of the system’s key features with “property-based random testing” (à la QuickCheck) of richer subsets of the system’s functionality.
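
(Here is a minimal sketch of the property-based random testing style mentioned above, using the Python hypothesis library in place of Haskell’s QuickCheck; the function and the property are invented for illustration, not taken from SAFE.)

```python
from hypothesis import given, strategies as st

def public_view(public: int, secret: int) -> int:
    # A toy computation whose public output should not depend on `secret`.
    return public * 2

@given(public=st.integers(), s1=st.integers(), s2=st.integers())
def test_noninterference(public, s1, s2):
    # Noninterference as a testable property: varying only the secret input
    # must never change the public output.
    assert public_view(public, s1) == public_view(public, s2)

test_noninterference()  # Hypothesis generates and runs many random cases.
```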

Our most interesting formal proof so far shows that the key security property of the information-flow abstract machine — the fact that a program’s secret inputs cannot influence its public outputs — is correctly preserved by our implementation on (a simplified version of) the SAFE hardware. This is interesting because the behavior of the abstract machine is achieved by a fairly intricate interplay between a hardware “rule cache” and a software layer that fills the cache as needed by consulting a symbolic representation of the current security policy. Since this mechanism lies at the very core of the SAFE architecture’s security guarantees, we wanted to be completely sure it is correct (and it is!).
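
(A toy Python sketch of that cache-filling mechanism, with invented names: the “hardware” lookup misses, a software handler consults a symbolic policy, and the computed rule is installed in the cache for future hits.)

```python
PUBLIC, SECRET = 0, 1
rule_cache = {}  # (opcode, tag1, tag2) -> result tag

def policy(opcode, t1, t2):
    # Symbolic security policy, consulted by software only on cache misses.
    return max(t1, t2)

def dispatch(opcode, t1, t2):
    key = (opcode, t1, t2)
    if key not in rule_cache:           # miss: trap to the software handler
        rule_cache[key] = policy(*key)  # fill the cache from the policy
    return rule_cache[key]              # hits are served from the "hardware"

print(dispatch("add", SECRET, PUBLIC))  # 1 (SECRET); this call fills the cache
print(dispatch("add", SECRET, PUBLIC))  # 1 again, now a cache hit
```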

Read more »