Johann Schumann on high-assurance systems

Conversations

Dr. Johann Schumann is a member of the Robust Software Engineering (RSE) Group at NASA Ames. He obtained his habilitation degree (2000) from the Technische Universität München, Germany, on the application of automated theorem provers in software engineering; his PhD thesis (1991) was on high-performance parallel theorem provers. Dr. Schumann is engaged in research on software and system health management, verification and validation of advanced air traffic control algorithms and adaptive systems, statistical data analysis of air traffic control systems and UAS incident data, and the generation of reliable code for data analysis and state estimation. His general research interests focus on the application of formal and statistical methods to improve the design and reliability of advanced safety- and security-critical software.

He is employed by SGT, Inc. as Chief Scientist Computational Sciences.

Read more »

Sandor Veres on autonomous agents

Conversations

Professor Sandor Veres was born and educated in Hungary as an applied mathematician. He completed his PhD on the dynamical modelling of stochastic systems in 1983 and then worked in industry on computer-controlled systems. In 1987–1988 he held two consecutive scholarships, at Imperial College London and at Linacre College, Oxford.

Between 1989 and 1999 he was a lecturer in the Electronic and Electrical Engineering department at the University of Birmingham, where he pursued research in system identification, adaptive control, and embedded electronic control systems. In 2000 he joined the University of Southampton to work in the areas of active vibration control systems, adaptive and learning systems, satellite formation flying, and autonomous control. Since 2002 he has held a chair in control systems engineering, and he has served as chairman of the IFAC Technical Committee on Adaptive and Learning Systems.

At Southampton he established the Centre for Complex Autonomous Systems Engineering, where he is now a visiting professor. His main research interest today is agent-based control systems, and since 2013 he has led the Autonomous Systems and Robotics Group in the Department of Automatic Control at the University of Sheffield. He has published about 200 papers, authored four books, and co-authored numerous software packages.

Read more »

New Paper: “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem”

Papers

We’ve released a new paper, recently accepted to the MIPC workshop at AAAI-14: “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem” by LaVictoire et al. This paper is essentially a shortened version of Barasz et al. (2014). For the history of the key results therein, see Robust Cooperation: A Case Study in Friendly AI Research.

Abstract of the new paper:

Applications of game theory often neglect that real-world agents normally have some amount of out-of-band information about each other. We consider the limiting case of a one-shot Prisoner’s Dilemma between algorithms with read access to one another’s source code. Previous work has shown that cooperation is possible at a Nash equilibrium in this setting, but existing constructions require interacting agents to be identical or near-identical. We show that a natural class of agents are able to achieve mutual cooperation at Nash equilibrium without any prior coordination of this sort.
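To see concretely what “existing constructions require interacting agents to be identical” means, here is a minimal Python sketch of that earlier, brittle approach, often nicknamed “CliqueBot”: cooperate exactly when the opponent’s program text matches your own. The agent names and the strings standing in for program text are our own illustrations, not the paper’s formalism; the paper’s Löbian agents replace this textual-equality check with a proof search, so that trustworthy but textually different agents can still cooperate.

```python
COOPERATE, DEFECT = "C", "D"

def cliquebot(my_source: str, opponent_source: str) -> str:
    """Cooperate iff the opponent is textually identical to me."""
    return COOPERATE if opponent_source == my_source else DEFECT

def play(agent_a, source_a, agent_b, source_b):
    """One-shot Prisoner's Dilemma in which each agent reads the other's source."""
    return agent_a(source_a, source_b), agent_b(source_b, source_a)

SRC_V1 = "cliquebot-v1"  # stand-in for actual program text
SRC_V2 = "cliquebot-v2"  # behaviorally identical agent, different text

print(play(cliquebot, SRC_V1, cliquebot, SRC_V1))  # ('C', 'C'): identical copies cooperate
print(play(cliquebot, SRC_V1, cliquebot, SRC_V2))  # ('D', 'D'): a renamed copy breaks it
```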


Christof Koch and Stuart Russell on machine superintelligence

News

Recently, Science Friday (hosted by Ira Flatow) featured an interview (page, mp3) with Christof Koch and Stuart Russell about machine superintelligence. Christof Koch is Chief Scientific Officer of the Allen Institute for Brain Science; Stuart Russell is a computer science professor at UC Berkeley and co-author of the world’s most-used AI textbook.

I was glad to hear both distinguished guests take seriously the opportunities and risks of AGI. Those parts of the conversation are excerpted below:

Read more »

Exponential and non-exponential trends in information technology

Analysis

Co-authored with Lila Rieber.

In The Singularity is Near, Ray Kurzweil writes that “every aspect of information and information technology is growing at an exponential pace.”1 In Abundance, the authors list eight fields — including nanomaterials, robotics, and medicine — as “exponentially growing fields.”2 The Second Machine Age says that “technical progress” in general is “improving exponentially.”3

These authors are correct to emphasize that exponential trends in technological development are surprisingly common (Nagy et al. 2013), and that these trends challenge the wisdom of our built-in heuristic to ignore futures that sound absurd. (To someone in the 1980s, the iPhone is absurd. To us, it is an affordable consumer good.)

Unfortunately, these and other popular discussions of “exponential technologies” are often very succinct and therefore ambiguous, resulting in public and professional misunderstanding.4 I (Luke) regularly encounter people who have read the books above and come away with the impression that all information technologies show roughly exponential trends all the time. But this isn’t true unless you have a very broad concept of what counts as “roughly exponential.”
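One way to make the ambiguity precise: on a strict reading, a trend is exponential just in case its logarithm grows linearly in time, i.e. just in case the quantity has a stable doubling time. The short Python sketch below implements that check; the data points are invented for illustration and are not drawn from any of the books discussed above.

```python
import math

def doubling_times(years, values):
    """Doubling time implied by each successive pair of data points."""
    times = []
    for (t0, v0), (t1, v1) in zip(zip(years, values),
                                  zip(years[1:], values[1:])):
        rate = math.log(v1 / v0) / (t1 - t0)   # continuous growth rate
        times.append(math.log(2) / rate)
    return times

# A genuinely exponential series has a constant doubling time ...
print(doubling_times([2000, 2002, 2004, 2006], [1, 2, 4, 8]))  # ≈ [2.0, 2.0, 2.0]
# ... whereas a linear series does not: each doubling takes longer.
print(doubling_times([2000, 2002, 2004, 2006], [1, 2, 3, 4]))  # ≈ [2.0, 3.4, 4.8]
```

By this strict test, many celebrated technology trends qualify over some periods and fail over others, which is part of what the examples below are meant to show.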

So, without speculating much about what Kurzweil & company intend to claim, we’ll try to clear up some common misunderstandings about exponential technologies by showing a few examples of exponential and not-so-exponential trends in information technology. A more thorough survey of trends in information technology must be left to other investigators.5

Read more »


  1. Page 85. In the same book, he also writes that “we see ongoing exponential growth of every aspect of information technology, including price-performance, capacity, and rate of adoption.” (p. 377). In How to Create a Mind, Kurzweil writes that “In the course of my investigation, I made a startling discovery: If a technology is an information technology, the basic measures of price/performance and capacity… follow amazingly precise exponential trajectories” (p. 254). 
  2. Page 57. In general, Diamandis & Kotler seem to agree with Kurzweil that all information technologies experience exponential growth curves. E.g. on page 99 they write that “Although [some agroecological] practices themselves look decidedly low tech, all the fields they’re informed by are information-based sciences and thus on exponential growth curves,” and on page 190 they write that “almost every component of medicine is now an information technology and therefore on an exponential trajectory.” 
  3. Page 10. The authors also seem to expect exponential trends for anything that becomes a digital process: “…batteries… haven’t improved their performance at an exponential rate because they’re essentially chemical devices, not digital ones…” (p. 52). 
  4. E.g. Kurzweil seems to use a fairly loose definition of “exponential.” For example, in Kurzweil (2001) he gives this chart of ISP cost-performance as an example of an exponential trend. Sometimes this seems to cause confusion in dialogue. For example, in response to Ilkka Tuomi’s criticisms (2002, 2003) of claims of exponential trends in computing, Kurzweil wrote that if Tuomi were correct, “I would have to conclude that the one-quarter MIPS computer costing several million dollars that I used at MIT in 1967 and the 1000 MIPS computer that I purchased recently for $2,000 never really existed… I admire his tenacity in attempting to prove that the world of information technology is flat (i.e., linear).” But Tuomi’s views don’t entail that, and Tuomi didn’t say that trends in information technology have been linear. The conflict appears to stem from the fact that Tuomi was using “exponential” in the strict sense, while Kurzweil was using the term in a very loose sense. This becomes clearer in Tuomi’s reply to Kurzweil. 
  5. Our thanks to Jonah Sinick for his assistance in researching this post. 

Benjamin Pierce on clean-slate security architectures

Conversations

Benjamin C. Pierce is Henry Salvatori Professor of Computer and Information Science at the University of Pennsylvania and a Fellow of the ACM. His research interests include programming languages, type systems, language-based security, computer-assisted formal verification, differential privacy, and synchronization technologies. He is the author of the widely used graduate textbooks Types and Programming Languages and Software Foundations.

He has served as co-Editor-in-Chief of the Journal of Functional Programming, as Managing Editor for Logical Methods in Computer Science, and as an editorial board member of Mathematical Structures in Computer Science, Formal Aspects of Computing, and ACM Transactions on Programming Languages and Systems. He is also the lead designer of the popular Unison file synchronizer.

Luke Muehlhauser: I previously interviewed Greg Morrisett about the SAFE project, and about computer security in general. You’ve also contributed to the SAFE project and gave an “early retrospective” talk on it (slides), so I’d like to ask you some more detailed questions about it.

In particular, I’d like to ask about the “verified information-flow architecture” developed for SAFE. Can you give us an overview of the kinds of information flow security properties you were able to prove about the system?


Benjamin C. Pierce: Sure. First, to remind your readers: SAFE is a clean-slate design of a new hardware / software stack whose goal is to build a network host that is highly resilient to cyber-attack. One pillar of the design is pervasive mechanisms for tracking information flow. The SAFE hardware offers fine-grained tagging and efficient propagation and combination of tags on each instruction dispatch. The operating system virtualizes these generic facilities to provide an “information-flow abstract machine,” on which user programs run.

Formal verification has been part of the SAFE design process right from the beginning. We’d originally hoped to be able to verify the actual running code of the OS in the style of seL4, but we found that the codebase was too big and moving too fast for this to be practical with a small verification team. Instead, we’ve developed a methodology that combines full formal verification of models of the system’s key features with “property-based random testing” (à la QuickCheck) of richer subsets of the system’s functionality.
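To give a flavor of the testing half of this methodology, here is a hand-rolled sketch of property-based random testing of a noninterference property, in the spirit of (though vastly simpler than) the QuickCheck-style testing described above. The three-instruction machine, the two-point label lattice, and the property encoding are our own toy choices, not SAFE’s.

```python
import random

LOW, HIGH = "L", "H"

def run(program, secret):
    """Tiny straight-line machine: an accumulator plus one information-flow label."""
    acc, label, output = 0, LOW, []
    for op, arg in program:
        if op == "load_secret":
            acc, label = secret, HIGH   # the accumulator is now tainted
        elif op == "add":
            acc += arg                  # arithmetic preserves the label
        elif op == "out":
            if label == LOW:            # the IFC check: never emit tainted data
                output.append(acc)
    return output

def random_program(length=8):
    return [random.choice([("load_secret", 0),
                           ("add", random.randint(0, 9)),
                           ("out", 0)])
            for _ in range(length)]

# Noninterference as a testable property: the same program run with two
# different secrets must produce identical public output.
for _ in range(10_000):
    program = random_program()
    assert run(program, secret=7) == run(program, secret=42), program
print("10,000 random programs, no counterexample to noninterference")
```

Deliberately breaking the machine (say, by removing the label check on “out”) makes the test fail within a few hundred random programs, which is what makes this style of testing useful while full proofs are still in progress.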

Our most interesting formal proof so far shows that the key security property of the information-flow abstract machine — the fact that a program’s secret inputs cannot influence its public outputs — is correctly preserved by our implementation on (a simplified version of) the SAFE hardware. This is interesting because the behavior of the abstract machine is achieved by a fairly intricate interplay between a hardware “rule cache” and a software layer that fills the cache as needed by consulting a symbolic representation of the current security policy. Since this mechanism lies at the very core of the SAFE architecture’s security guarantees, we wanted to be completely sure it is correct (and it is!).
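The cache-plus-handler interplay can be sketched in a few lines. In the toy version below, the “hardware” consults a finite cache of concrete tag-combination rules on each instruction, and a miss traps to a “software” handler that computes the answer from a symbolic policy and installs it. The two-point lattice and join policy are our own stand-ins; SAFE’s tags and policies are far richer.

```python
LEQ = {("L", "L"), ("L", "H"), ("H", "H")}   # two-point lattice: L <= H

def policy(opcode, tag1, tag2):
    """Symbolic security policy: a result's tag is the join of its inputs' tags."""
    return tag2 if (tag1, tag2) in LEQ else tag1

class RuleCache:
    """The 'hardware' side: a finite cache of concrete tag rules."""
    def __init__(self):
        self.cache, self.misses = {}, 0

    def result_tag(self, opcode, tag1, tag2):
        key = (opcode, tag1, tag2)
        if key not in self.cache:            # miss: trap to the software
            self.misses += 1                 # handler, which consults the
            self.cache[key] = policy(*key)   # policy and installs the rule
        return self.cache[key]

rc = RuleCache()
print(rc.result_tag("add", "L", "H"))  # 'H' -- computed by the miss handler
print(rc.result_tag("add", "L", "H"))  # 'H' -- now served from the cache
print(rc.misses)                       # 1
```

The verification challenge Pierce describes is exactly that this cached, two-layer mechanism must be shown to preserve the abstract machine’s noninterference property, including in states where the cache is only partially filled.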

Read more »

Michael Fisher on verifying autonomous systems

Conversations

Michael Fisher is a professor of Computer Science at the University of Liverpool, specialising in logical methods and automated formal verification. He is Director of the multi-disciplinary Centre for Autonomous Systems Technology and a member of the Logic and Computation group in the Department of Computer Science.

Professor Fisher is also a Fellow of both the BCS and the IET, and a member of the UK Computing Research Committee. He is Chair of the Department’s Industrial Liaison Committee.

Luke Muehlhauser: In “Verifying Autonomous Systems” (2013), you and your co-authors summarize some recent approaches to formally verifying autonomous systems. At one point, you write:

Many autonomous systems, ranging over unmanned aircraft, robotics, satellites, and even purely software applications, have a similar structure, namely layered architectures as summarized in Figure 1. Although purely connectionist/sub-symbolic architectures remain prevalent in some areas, such as robotics, there is a broad realization that separating out the important/difficult choices into an identifiable entity can be very useful for development, debugging, and analysis. While such layered architectures have been investigated for many years they appear increasingly common in autonomous systems.

The accompanying figure is:

Figure 1. Typical hybrid autonomous system architecture, with suitable analysis techniques noted.
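To make the layering concrete, here is a minimal sketch of the pattern the quoted passage describes: a discrete decision layer that makes the “important/difficult choices” (the natural target for formal verification), sitting above a continuous control layer (the natural target for simulation-style analysis). The function names, the sensor inputs, and the three behaviors are our own illustrative choices, not Fisher’s formalism.

```python
# Decision layer: a small, discrete rule-based agent -- the "identifiable
# entity" one would model-check.
def decide(battery_level: float, obstacle_distance: float) -> str:
    if battery_level < 0.2:
        return "return_to_base"
    if obstacle_distance < 1.0:
        return "avoid"
    return "explore"

# Control layer: continuous, sub-symbolic behavior -- the part one would
# analyse with simulation or control-theoretic methods.
def control(mode: str, heading: float) -> float:
    if mode == "return_to_base":
        return 180.0              # head back toward base
    if mode == "avoid":
        return heading + 45.0     # steer away from the obstacle
    return heading                # keep exploring on the current heading

# One tick of the hybrid loop: sense -> decide (discrete) -> act (continuous).
mode = decide(battery_level=0.15, obstacle_distance=5.0)
print(mode, control(mode, heading=90.0))  # return_to_base 180.0
```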

What can you say about how common you think something like your “typical hybrid autonomous system architecture” is? E.g. do you think that kind of architecture is in use in ~20 different real-world applications, or ~200 different real-world applications, or perhaps more? Do you think there’s a trend in favor of such architectures? Is there a trend toward such architectures even in robotics, do you think?

Read more »

Harry Buhrman on quantum algorithms and cryptography

Conversations

Harry Buhrman is head of the research group ‘Algorithms and Complexity’ at the Centrum Wiskunde & Informatica, Amsterdam, which he joined in 1994. Since 2000 he has also held a joint appointment as full professor of computer science at the University of Amsterdam. Buhrman’s research focuses on quantum computing, algorithms, complexity theory, and computational biology. In 2003 he obtained a prestigious Vici award, and he has been coordinator of several national and international projects.

Buhrman is an editor of several international journals and a member of various advisory and scientific boards, such as the advisory board of the Institute for Quantum Computing (Waterloo, Canada).

Luke Muehlhauser: In theory, position-based quantum cryptography could ensure that certain tasks could only be performed at certain locations:

For example, a country could send a message in such a way that it can only be deciphered at the location of its embassy in another country.

In Buhrman et al. (2011), you and your co-authors showed that

if collaborating adversaries are allowed to pre-share an arbitrarily large entangled quantum state, then… position-based cryptography… is impossible… in the quantum setting.

On the positive side, we show that for adversaries that are restricted to not share any entangled quantum states, secure position-verification is achievable. Jointly, these results suggest the interesting question whether secure position-verification is possible in the case of a bounded amount of entanglement.

What is the current state of our knowledge on whether secure position-verification is possible given a bounded amount of entanglement?


Harry Buhrman: Classically, such schemes will always be insecure, even if one restricts the running time or other resources of the adversaries. Quantumly, the situation is more complicated, although we and others have shown that with an exponential amount of entanglement, quantum schemes can also be broken and are insecure. The quest is thus on for schemes that (1) can be easily implemented by the honest parties and (2) require unreasonable amounts of entanglement for adversaries to break. We have seen some partial progress along these lines. For example, a paper by Beigi and Koenig shows that there are schemes for which a linear amount of entanglement is necessary to break them. It turns out that the question of whether such schemes exist, and of proving for particular schemes that they are secure, is a very hard problem that we are currently working on. The approach we have taken would, if successful, imply results in classical complexity theory, and is thus very hard. On the other hand, it may still be that there simply do not exist schemes that are secure in this sense.
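To illustrate why classical schemes are always breakable, here is a toy timing model of the standard copy-and-forward attack on one-dimensional position verification; the positions and the speed-of-light-equals-one convention are our own simplifications. Verifiers V0 (at position 0) and V1 (at position 2) send challenges x and y and expect the reply f(x, y) within light-travel time from the claimed position 1; the sketch models only the timing, since both the honest prover and the colluders can compute f once they hold both challenges. Because classical information can be copied, two colluders straddling the claimed position can each forward a copy of their challenge to the other and still answer on time; quantum no-cloning is what blocks this copying step, and the result quoted above shows that enough pre-shared entanglement lets adversaries work around it anyway.

```python
# Speed of light = 1; all times are measured from when the challenges are sent.

def reply_times_honest(pos=1.0):
    """A prover really at `pos`: each challenge travels there and back."""
    return {"V0": 2 * pos, "V1": 2 * (2 - pos)}

def reply_times_attack(a0=0.5, a1=1.5):
    """No prover at 1.0; colluders at a0 and a1 copy and forward challenges."""
    d = a1 - a0
    x_at_a0, x_at_a1 = a0, a0 + d               # x from V0, plus forwarded copy
    y_at_a1, y_at_a0 = 2 - a1, (2 - a1) + d     # y from V1, plus forwarded copy
    return {
        "V0": max(x_at_a0, y_at_a0) + a0,        # a0 replies once it holds x and y
        "V1": max(x_at_a1, y_at_a1) + (2 - a1),  # a1 likewise
    }

print(reply_times_honest())  # {'V0': 2.0, 'V1': 2.0}
print(reply_times_attack())  # {'V0': 2.0, 'V1': 2.0} -- indistinguishable to the verifiers
```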
Read more »