Mike Frank on reversible computing


Michael P. Frank received his Bachelor of Science degree in Symbolic Systems from Stanford University in 1991, and his Master of Science and Doctor of Philosophy degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 1994 and 1999, respectively. While at Stanford, he helped his team win the world championship in the 1990-91 International Collegiate Programming Contest sponsored by the Association for Computing Machinery. Over the course of his student years, he held research internships at IBM’s T.J. Watson Research Center, NASA’s Ames Research Center, the NEC Research Institute, Stanford Research Institute, and the Center for the Study of Language and Information at Stanford. He also spent the summer after his freshman year as a software engineering intern at Microsoft. During 1998-1999, Mike stopped out of school for a year to work at a friend’s web startup (Stockmaster.com).

After graduation, he worked as a tenure-track Assistant Professor in the Computer and Information Science and Engineering department at the University of Florida from 1999-2004, and at the Electrical and Computer Engineering department at the Florida A&M University – Florida State University College of Engineering from 2004-2007. After an ill-fated attempt to start a business in 2007-2008, he returned to academia in a variety of short-term research and teaching positions in the Florida A&M Department of Physics and the FAMU-FSU College of Engineering. His present title is Associate in Engineering, and he spends most of his time supervising multidisciplinary senior engineering projects. Over the years, Dr. Frank’s research interests have spanned a number of different areas, including decision-theoretic artificial intelligence, DNA computing, reversible and quantum computing, market-based computing, secure election systems, and digital cash.

Luke Muehlhauser: Some long-term computing forecasts include the possibility of nanoscale computing, but efficient computing at that scale appears to require reversible computing due to the Landauer limit. Could you please explain what reversible computing is, and why it appears to be necessary for efficient computing beyond a certain point of miniaturization?


Mike Frank: Reversible computing refers to computation using logically reversible (invertible) transformations of the digital state of the machine, transformations that are also carried out using physical mechanisms that are (almost) thermodynamically reversible – that is, mechanisms that produce no, or negligibly small amounts of, physical entropy. Thermodynamic reversibility requires logical reversibility: since the underlying microscopic physics is itself one-to-one, if you carry out a many-to-one transformation of the logical state, that necessitates a corresponding one-to-many transformation of the rest of the detailed physical state – or in other words, entropy production.
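
To make the logical-reversibility point concrete, here is a small illustration of my own (not part of the original interview): an ordinary AND gate maps four input states onto two outputs, so the input cannot be recovered, whereas a standard reversible gate such as the Toffoli (controlled-controlled-NOT) gate is a bijection on its state space and can be run backward.

```python
# Minimal sketch (my own illustration): an irreversible AND gate versus the
# reversible Toffoli (CCNOT) gate.
from itertools import product

def and_gate(a, b):
    # 2 bits in -> 1 bit out: a many-to-one map, so information is destroyed.
    return a & b

def toffoli(a, b, c):
    # 3 bits in -> 3 bits out: flips c only when a = b = 1.
    return (a, b, c ^ (a & b))

# AND: group the inputs by their output value to show the collapse.
preimages = {}
for a, b in product((0, 1), repeat=2):
    preimages.setdefault(and_gate(a, b), []).append((a, b))
print("AND preimages:", preimages)   # output 0 has three distinct preimages

# Toffoli: verify that the map is a bijection and is its own inverse,
# i.e., no information is ever lost and every step can be undone.
states = list(product((0, 1), repeat=3))
assert sorted(toffoli(*s) for s in states) == states
assert all(toffoli(*toffoli(*s)) == s for s in states)
print("Toffoli is a bijection and its own inverse.")
```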

Reversible computing becomes important at a small scale because bit energies are becoming smaller, and are approaching the point (around 1 electron-volt) where they cannot be made much smaller without running into severe reliability problems due to thermal noise. But when you manipulate bits in an irreversible fashion, this essentially involves dissipating the entire bit energy to waste heat on each operation. Improvements in computer energy efficiency in the past have been driven mainly by reductions in bit energies, so if bit energy stops decreasing, it’s a big problem – the energy efficiency of traditional irreversible technology will level off and stop improving. Note that this observation depends only on very fundamental thermodynamic considerations of thermal noise, and so it’s not affected by the details of whatever clever alternative nanoscale devices you may come up with. Your device technology could be based on superconductors, carbon nanotubes, nanophotonics, quantum interference transistors – I don’t care. As long as it’s a reliable, irreversible device, it will have to dissipate at least on the order of 1 electron volt (about 40 kT) per bit-operation in a room-temperature environment, full stop, end of story.
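
As a quick sanity check on those figures (my own arithmetic, not part of the interview), here is how the quoted numbers work out at an assumed room temperature of 300 K:

```python
# Back-of-the-envelope check of the energy scales quoted above, at T = 300 K
# (the choice of 300 K as "room temperature" is my assumption).
import math

k_B = 1.380649e-23      # Boltzmann constant, in joules per kelvin
eV  = 1.602176634e-19   # joules per electron volt
T   = 300.0             # assumed room temperature, in kelvin

kT = k_B * T
print(f"kT       = {kT:.3e} J  = {kT / eV:.4f} eV")
print(f"kT ln 2  = {kT * math.log(2) / eV:.4f} eV   (the Landauer limit)")
print(f"1 eV     = {eV / kT:.0f} kT   (i.e., roughly the '40 kT' quoted above)")
```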

With reversible computing, the situation is markedly different. Although it remains true that bit energies in reliable devices are forced to level off in the tens of kT, if the device is reversible, then we no longer need to dissipate that entire bit energy on each operation. That’s the critical difference. In principle, for bit-operations that are performed using increasingly high-quality reversible transformations, the amount of energy that’s dissipated per operation can be made arbitrarily small – smaller than kT, and even much smaller than kT ln 2 (about 0.69 kT), the Landauer limit, which is the energy dissipation fundamentally associated with the creation of a single bit’s worth of entropy. That’s the theoretical prediction, at least. But to actually realize practical, high-quality, high-performance, cost-efficient reversible computing below the Landauer limit is, I would say, one of the most difficult, hard-core engineering challenges of our time. That’s not to say it’s impossible; indeed, there has never been any valid proof from fundamental physics that it’s impossible, and to the contrary, there are many indications, from the research that’s been done to date, that suggest that it will probably be possible to achieve eventually. But it’s certainly not an easy thing to accomplish, given the state of engineering know-how at this point in time. A future computer technology that actually achieves high-quality, efficient reversible computing will require a level of device engineering so precise and sophisticated that it will make today’s top-of-the-line device technologies seem, to future eyes, as crude as the practice of chiseling stone tablets looks to us today.


Luke: You write that “there has never been any valid proof from fundamental physics that [reversible computing is] impossible, and to the contrary, there are many indications… that suggest that it will probably be possible to achieve eventually.”

What are some of these indications suggesting that reversible computing should be possible to achieve eventually?


Mike: Yes, good question. First, there have been not only simulations but also laboratory demonstrations illustrating that indeed, just as the theory predicts, adiabatic switching [1] of the voltage on a capacitor (for example) can dissipate as little energy as desired, as the transition is carried out more slowly. However, these kinds of examples typically invoke external resonators to drive the transitions, so they don’t really count as complete, self-contained systems until the design of those resonators is further fleshed out – and the design of high-quality resonators is quite a difficult engineering problem by itself.
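
For concreteness, here is a rough numerical illustration of that standard adiabatic-charging scaling; the component values (capacitance, resistance, voltage swing) are my own assumptions, not figures from the interview:

```python
# Standard adiabatic-charging estimate: abruptly switching a capacitive node
# dissipates about (1/2) C V^2 no matter how it is done, while ramping the
# supply over a time T >> RC dissipates roughly (RC / T) * C * V^2, which
# shrinks without bound as the ramp is slowed down.
C  = 1e-15         # assumed node capacitance: 1 femtofarad
R  = 1e4           # assumed effective switch resistance: 10 kilohms
V  = 1.0           # assumed logic swing: 1 volt
eV = 1.602e-19     # joules per electron volt

abrupt = 0.5 * C * V**2
print(f"abrupt switching          : {abrupt / eV:8.0f} eV per transition")

for T in (1e-9, 1e-8, 1e-7):                 # ramp times: 1 ns, 10 ns, 100 ns
    adiabatic = (R * C / T) * C * V**2
    print(f"adiabatic ramp, T = {T:.0e} s : {adiabatic / eV:8.1f} eV per transition")
```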

In the meantime, we do already know of other self-contained systems that undergo at least simple state transitions with negligible dissipation – consider, for example, a small, isolated diamond crystal or other solid nanostructure that is rotating or vibrating, in vacuum, under zero-gravity conditions. In principle, even in total isolation, such a structure will emit (extremely weak) gravitational waves, and will therefore eventually settle into a stationary state (with no detectable vibration, and no spin except about axes of perfect axial symmetry). But, for a small, rigid object, this gravitational settling process could take many billions of cycles (at least!), which, for a nanoscale object, can translate into far less than kT of energy dissipation per cycle.

A related notion is that of a time crystal; these have recently been proposed as hypothetical examples of quantum systems that could theoretically cycle through a set of distinguishable states forever, even when fully relaxed into their quantum ground states. Time crystals themselves are a rather controversial concept. However, even if a true (i.e., perfect) time crystal turns out to be impossible, a system like a rotating solid object that takes a very long time to settle into a stationary state is definitely a realistic one.

The problem with that thought experiment is that a simple periodic motion, like that of a rotating or vibrating crystal, is not interesting computationally. For computational purposes, we want machines that traverse a very complexly-shaped trajectory through their configuration space, not just a short cycle. The configuration space of any useful computer has an enormous number of dimensions, but suppose, for the moment, that it had only three dimensions. Imagine now a polished ball, or particle (representing the current state of the machine), coasting along a hollowed-out channel through a solid block of material; the channel represents the constraints imposed by the structure of the computational mechanism (i.e., by the device physics). A complex reversible computation then corresponds, in this analogy, to the ball coasting smoothly, ballistically, along a long, twisty path through the material.

Can a physical system be engineered in such a way that the ball-particle (machine state) can coast through many twists and turns (many steps of computation) without losing a significant fraction of its initial dynamical (kinetic) energy? I feel that this is the fundamental question that the science of reversible computing has yet to answer definitively, one way or the other.

A good starting point would be to demonstrate any physical system that coasts along a long, complexly-shaped, deterministic (non-bifurcating) trajectory through configuration space – never mind what function it’s computing. One can always say it’s just computing its own next state. It’s not clear why this kind of behavior should be impossible – an everyday amusement-park roller coaster illustrates that even macroscopic objects can coast through somewhat complex deterministic trajectories for at least a little while. And at the nanoscale, even the uncertainty principle only limits your certainty about when you arrive at the final state, not necessarily about the shape of the trajectory that you follow.

So, how far can we push this “roller-coaster” idea of ballistic computing? Maybe pretty far. But we will not know for sure that highly efficient reversible computing is really possible until we have further developed a scientific and engineering discipline showing exactly how increasingly efficient examples of such “roller-coaster”-like constructs can be implemented – constructs designed to traverse a trajectory not just through simple 3D space, but through the multidimensional configuration space of a complex, nanostructured system composed of many interacting subsystems, which might then be doing something interesting in terms of computation.

I personally expect that it is – if only because there is no sound proof that it is not (despite the many attempts that have been made to rule it out) – but really fleshing out the discipline of how to engineer such systems is far more difficult, I now believe, than many of the field’s early enthusiasts may have anticipated.

What we really need, I think, are certain key theoretical breakthroughs, such as, for starters, a complete theoretical (but also realizable, i.e., not overly idealized) model of a self-contained quantum system with many interacting parts that dynamically evolves in such a way (that is, along a complex deterministic trajectory in configuration space).

I speculate that perhaps some sort of dynamical version of the quantum watchdog effect will be required in order to keep the various interacting components continuously in sync with each other, while they are simultaneously also coasting forward along complex trajectories in configuration space which are constrained, by the shape of the subsystems’ mutual interactions, to carry out interesting computations. But, I don’t know for sure if this “dynamical quantum watchdog” approach can be made to work.

Overall, to show how to fully realize reversible computing is a very thorny theoretical problem, one that I think will require serious attention from some of the foremost minds in quantum physics to solve definitively. If I had the research support, I might go back to school, bone up on my quantum theory, and try to solve it myself. But unfortunately, basic research funding for this field has been sorely lacking, in my experience, and nowadays, I have a family to support.


Luke: Why do you think reversible computing has been unable thus far to attract significant research funding and researcher time?


Mike: That is a good question. Part of the reason, I think, is that a lot of misinformation and many misconceptions have circulated about this field for a while. For example, there is a sort of scholarly rumor or “urban legend” claiming that John von Neumann, a famous pioneer of computer architecture and quantum theory, had proved at some point that information processing with less than kT ln 2 dissipation per binary “decision” was impossible. But there is no real substance behind this legend; all that we actually have from von Neumann on this is one brief remark he made during a lecture, a remark that was not backed up by any analysis. He never published any peer-reviewed journal article on this topic, possibly because he realized that it was a mistake and had never actually proved it. Probably he was implicitly assuming that decision-making implied the creation of entropy, since an unknown outcome (of the decision) was being replaced by a known one, which would imply entropy creation elsewhere to satisfy the second law of thermodynamics. But of course, a decision-making process can be deterministic, as most computations are; if the outcome is predetermined by the input, then there need be no change in entropy in the decision-making process. And even if you want the outcome of the decision-making process to be random, that only requires moving a bit of entropy from the environment into the computer, not generating any new entropy. My feeling is that von Neumann simply hadn’t thought all this through carefully enough when he first made that remark, and that, if he had ever been exposed to the concept of reversible computing during his lifetime, he would have immediately said, “Oh, of course.”

A second rumor or urban legend against reversible computing is the claim that it would violate the central theorems proved by Claude Shannon, pioneer of the mathematical theory of information and communication, concerning the minimum power required for communication at a given bandwidth and reliability. However, I personally have meticulously searched every single page of Shannon’s entire collected works, and absolutely nowhere in his published work did he ever address power dissipation, not even once! All of his work addresses only the power transmitted; nowhere do his theorems establish that the power contained in a signal cannot later be recovered by suitable mechanisms. In the design of reversible computing mechanisms, we explicitly show how the energy contained in any given pattern of physically encoded bits can be recovered – for example, by using the time-reverse of the exact same reversible process that created that bit-string in the first place. Again, if Shannon were alive today, I’m sure he would look at the principles of reversible computing and say, “Yes, of course that works.”

So, there is a bias against reversible computing that is based on widespread misunderstanding or misinterpretation of the work of these respected pioneers. Stacked on top of this, there have been many more recent attempts to prove reversible computing impossible, but all of the supposed “proofs” contain either fallacies in reasoning or invalid assumptions that have been shown definitively to be incorrect – I have a list of about 15 of these erroneous arguments, and the results showing that they’re clearly wrong, in one of my talks. But each time the flaw in one of these skeptical arguments is pointed out, disproving its core objection definitively with a clear counterexample, the skeptics just come up with new and different (but still flawed) arguments against reversible computing. Some of the skeptics are perhaps motivated by the desire not to lose face by admitting that they’ve been mistaken in their previous arguments. I think that probably the only way many of these reversible computing skeptics will ever be convinced is if an already commercially successful reversible computer is staring them in the face – then they will no longer be able to deny it. A mere theoretical model will never satisfy the hard-core skeptics – and perhaps that’s as it should be. After all, all of the theory in the world is useless until we are able to use it to start building practical machines. And to be fair, the challenges that still need to be faced in order to make reversible computing practical are substantial. But the prevalent attitude does make it very difficult to get research funding. And without substantial funding – or top-level researchers who have the free time and inclination to solve the remaining foundational problems definitively – these problems will never be solved, and reversible computing will never become more than an obscure academic curiosity. But if adequate attention is paid to solving the key remaining problems, reversible computing has the potential to become the foundation of 21st-century computer engineering. Certainly, we cannot make more than a modest amount of further progress in the power-performance of most computing applications without it.


Luke: Can you say a bit about the history of reversible computing thus far? What were the key pieces of conceptual progress after the initial demonstration of the Landauer limit? How did they build on each other, or on other results in physics or computing?


Mike: I’ll summarize a few of the major historical developments; but keep in mind, any brief history is bound to be incomplete. The first major theoretical development in reversible computing after Landauer’s analysis was his protégé Charlie Bennett’s 1973 paper showing conclusively that irreversible operations are not strictly required for computation – that is, any computation can be carried out using only reversible transformations of the logical state of the machine. At worst, you might need to irreversibly initialize, just once, whatever new storage space you might want to permanently dedicate to preserving the final output of the computation – but all of the intermediate calculations can be carried out reversibly in temporary space that can be reused by other computations. The basic technique involves temporarily storing all of the intermediate data that would otherwise be thrown away, so it is somewhat space-intensive; but later, in 1989, Bennett showed that more space-efficient variations on the technique were possible. Finally, it was even shown in 1997 (by Lange et al.) that reversible computing in linear space (the same as an irreversible machine) is possible, although that method is not time-efficient. It is probably the case that general reversible computations do require some amount of overhead in either space or time complexity; indeed, Ammer and I proved rigorously that this is true in a certain limited technical context. But the overheads of reversible algorithms can theoretically be overwhelmed by their energy-efficiency benefits, improving overall cost-performance for large-scale computations.
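
To give a feel for Bennett’s trick (this is my own highly simplified, purely classical sketch of the bookkeeping, not code from the 1973 paper), the pattern is: compute forward while saving a history of intermediate states so that no step discards information, copy out the final answer, then run the history backward to restore the temporary space:

```python
# Conceptual sketch of Bennett-style compute / copy / uncompute (my own
# simplification; a real reversible machine would implement each of these
# phases with physically reversible operations).

def reversible_run(step, x0, n_steps):
    history = []                  # temporary space for would-be-discarded data
    x = x0
    for _ in range(n_steps):      # forward phase: save every intermediate state
        history.append(x)
        x = step(x)
    output = x                    # copy phase: record the answer in fresh space
    for _ in range(n_steps):      # uncompute phase: unwind the history,
        x = history.pop()         # reclaiming the temporary space
    assert x == x0 and not history  # temporary storage is back to its start
    return output

# Example: iterate an ordinary (irreversible-looking) step function this way.
collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1
print(reversible_run(collatz, 27, 10))   # prints the state after 10 steps
```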

In terms of practical implementations, some early conceptual work was done by Fredkin and Toffoli at MIT in a 1978 proposal suggesting the use of inductors and capacitors to shuttle energy around between devices in a circuit to implement logic in a (near) dissipationless fashion. In the mid-80s, Charles Seitz and colleagues at Caltech came up with an improved technique in which inductors could be shared between logic elements and brought off-chip, but they didn’t answer the question of whether any arbitrary logical function could be implemented using their technique. In the early 1990s, Koller and Athas developed a general adiabatic logic family, but it was only able to handle combinational (not sequential) logic. Finally, in 1993 Younis and Knight at MIT developed their general-purpose, adiabatic, sequential Split-Level Charge Recovery Logic – although even that one still contained a small bug limiting its energy efficiency, which we found and fixed in the late 90s, during my thesis work. I also invented another, simpler (and bug-free) universal adiabatic logic family called 2LAL at the University of Florida in 2000 [2]. Of course, by then, many, many other research groups were also pursuing adiabatic logic, so I won’t attempt to credit all of the many contributions to the field that have been made in the meantime. However, I will say that most of the published techniques that I’ve seen for implementing adiabatic logic in CMOS (that is, using standard transistors) contain conceptual flaws of one kind or another which unnecessarily limit their energy efficiency. I have not seen any other CMOS-based approach yet that is as efficient as 2LAL. In detailed simulations at UF, we found [3] that sequential 2LAL circuits could dissipate as little as 1 electron volt of energy per transistor per cycle, limited only by the leakage current in the particular device family we were simulating. This figure could be made even lower if other device families with reduced leakage were used.

People have also proposed alternative implementations of reversible computing that don’t use transistors at all; for example, there is Craig Lent’s work at Notre Dame on quantum-dot cellular automata, and there is research that’s been done by several groups on reversible logic in superconducting circuits [4]. However, these alternative approaches present their own challenges, and I’d say it’s not yet clear whether they will become viable successors to CMOS – but at least they are trying, and are paying attention to these issues! By contrast, the vast majority of other nanocomputing proposals that have been made – namely, all of the ones that totally ignore reversible computing – are fundamentally doomed to failure, in that they will never be able to get many orders of magnitude beyond the limits of CMOS in terms of their energy efficiency, since they all ignore the fundamental limit on the energy efficiency of irreversible devices that follows from the requirement to maintain high reliability despite thermal noise. If you want to develop a new logic technology that has any hope of being successful, in the sense of having power-performance that can potentially scale many orders of magnitude (and not, at best, just 1 or 2) beyond the limits of CMOS, then designing your devices around reversible computing principles is not optional – it’s an absolute necessity! Unfortunately, in my experience, there are far too few device physicists who seem to understand this basic fact.


Luke: If a researcher wanted to make progress toward reversible computing, where would you recommend they start looking? What are the most promising avenues of research at this point, do you think?


Mike: Well, as I mentioned earlier, I think that some fundamental breakthroughs in basic theory are still needed. We are still lacking a comprehensive theoretical model demonstrating how a realistic quantum-mechanical system composed of many interacting subsystems can be made to coast along a complexly-constrained, deterministic trajectory through configuration space with negligible entropy increase per operation. Reversible computing is somewhat easier than full-blown quantum computing, in that we don’t have to maintain delicate quantum superpositions of entangled states. Instead, we can presumably rely only on naturally stable (a.k.a. “classical”) states, or “pointer states.” But, accurately modeling the dynamical behavior of microscopic physical systems still requires a deep understanding of quantum mechanics; such an understanding is especially necessary for the design of nanoscale devices that could potentially serve as effective components of an efficient reversible computing system. If I were starting over in my career, I would begin by giving myself a comprehensive, top-of-the-line, world-class education in quantum mechanics, and then I’d re-develop the fundamental principles of reversible computing technology bottom-up, building on that foundation – with an emphasis on creating self-contained, energy-efficient, ballistic reversible computational mechanisms that are as simple as possible to design and manufacture. It is a big challenge, but I think that someone will show us the way eventually.


Luke: Thanks, Mike!


  1. See slide 19 of Snider et al. (2013)
  2. See slides 89-93 of Frank (2006a)
  3. See slide 27 of Frank (2006b)
  4. See Plourde (2012); Ren (2012); Semenov (2012); Ustinov (2012)