Scott Aaronson on Philosophical Progress



Scott Aaronson is an Associate Professor of Electrical Engineering and Computer Science at MIT. Before that, he did a PhD in computer science at UC Berkeley, followed by postdocs at the Institute for Advanced Study in Princeton and the University of Waterloo. His research focuses on the capabilities and limits of quantum computers, and more generally on the connections between computational complexity and physics. Aaronson is known for his blog and for founding the Complexity Zoo (an online encyclopedia of complexity classes); he’s also written about quantum computing for Scientific American and the New York Times. His first book, Quantum Computing Since Democritus, was published this year by Cambridge University Press. He’s received the Alan T. Waterman Award, the PECASE Award, and MIT’s Junior Bose Award for Excellence in Teaching.

Luke Muehlhauser: Though you’re best known for your work in theoretical computer science, you’ve also produced some pretty interesting philosophical work, e.g. in Quantum Computing Since Democritus, “Why Philosophers Should Care About Computational Complexity,” and “The Ghost in the Quantum Turing Machine.” You also taught a fall 2011 MIT class on Philosophy and Theoretical Computer Science.

Why are you so interested in philosophy? And what is the social value of philosophy, from your perspective?

Scott Aaronson: I’ve always been reflexively drawn to the biggest, most general questions that it seemed possible to ask. You know, like are we living in a computer simulation? if not, could we upload our consciousnesses into one? are there discrete “pixels” of spacetime? why does it seem impossible to change the past? could there be different laws of physics where 2+2 equaled 5? are there objective facts about morality? what does it mean to be rational? is there an explanation for why I’m alive right now, rather than some other time? What are explanations, anyway? In fact, what really perplexes me is when I meet a smart, inquisitive person—let’s say a mathematician or scientist—who claims NOT to be obsessed with these huge issues! I suspect many MIRI readers might feel drawn to such questions the same way I am, in which case there’s no need to belabor the point.

From my perspective, then, the best way to frame the question is not: “why be interested in philosophy?” Rather it’s: “why be interested in anything else?”

But I think the latter question has an excellent answer. A crucial thing humans learned, starting around Galileo’s time, is that even if you’re interested in the biggest questions, usually the only way to make progress on them is to pick off smaller subquestions: ideally, subquestions that you can attack using math, empirical observation, or both. For again and again, you find that the subquestions aren’t nearly as small as they originally looked! Much like with zooming in to the Mandelbrot set, each subquestion has its own twists and tendrils that could occupy you for a lifetime, and each one gives you a new perspective on the big questions. And best of all, you can actually answer a few of the subquestions, and be the first person to do so: you can permanently move the needle of human knowledge, even if only by a minuscule amount. As I once put it, progress in math and science — think of natural selection, Gödel’s and Turing’s theorems, relativity and quantum mechanics — has repeatedly altered the terms of philosophical discussion, in a way that philosophical discussion itself rarely has! (Of course, this is completely leaving aside math and science’s “fringe benefit” of enabling our technological civilization, which is not chickenfeed either.)

On this view, philosophy is simply too big and too important to be confined to philosophy departments! Of course, the word “philosophy” used to mean the entire range of fundamental inquiry, from epistemology and metaphysics to physics and biology (which were then called “natural philosophy”), rather than just close textual analysis, or writing papers with names like “A Kripkean Reading of Wittgenstein’s Reading of Frege’s Reading of Kant.” And it seems clear to me that there’s enormous scope today for “philosophy” in the former sense — and in particular, for people who love working on the subquestions, on pushing the frontiers of neuroscience or computer science or physics or whatever else, but who also like to return every once in a while to the “deep” philosophical mysteries that motivated them as children or teenagers. Admittedly, there have been many great scientists who didn’t care at all about philosophy, or who were explicitly anti-philosophy. But there were also scientists like Einstein, Schrödinger, Gödel, Turing, or Bell, who not only read lots of philosophy but (I would say) used it as a sort of springboard into science — in their cases, a wildly successful one. My guess would be that science ultimately benefits from both the “pro-philosophical” and the “anti-philosophical” temperaments, and even from the friction between them.

As for the “social value” of philosophy, I suppose there are a few things to say. First, the world needs good philosophers, if for no other reason than to refute bad philosophers! (This is similar to why the world needs lawyers, politicians, and soldiers.) Second, the Enlightenment seems like a pretty big philosophical success story. Philosophers like Locke and Spinoza directly influenced statesmen like Thomas Jefferson, in ways you don’t have to squint to see. Admittedly, philosophers’ positive influence on humankind’s moral progress is probably less today than in the 1700s (to put it mildly). And also, most of the philosophical questions that have obsessed me personally have been pretty thin in their moral implications. But that brings me to the third point: namely, to whatever extent you see social value in popularizing basic science — that is, in explaining the latest advances in cosmology, quantum information, or whatever else to laypeople — to that extent I think you also need to see social value in philosophy. For the popularizer doesn’t have the luxury of assuming the importance of the particular subquestion on which progress has been made. Instead, he or she constantly needs to say what the little tendrils currently being explored do (or just as importantly, don’t) imply about the whole fractal — and when you’re zooming out like that, it’s hard to avoid talking about philosophy.

Luke: You write that “usually the only way to make progress on [the big questions] is to pick off smaller subquestions: ideally, subquestions that you can attack using math, empirical observation, or both.” This is an idea you wrote about at greater length in one of your papers — specifically, in this passage:

whenever it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′.

Of course, even if Q′ is solved, centuries later philosophers might still be debating the exact relation between Q and Q′! And further exploration might lead to other scientific or mathematical questions — Q′′, Q′′′, and so on — which capture aspects of Q that Q′ left untouched. But from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.

…A good replacement question Q′ should satisfy two properties: (a) Q′ should capture some aspect of the original question Q — so that an answer to Q′ would be hard to ignore in any subsequent discussion of Q, [and] (b) Q′ should be precise enough that one can see what it would mean to make progress on Q′: what experiments one would need to do, what theorems one would need to prove, etc.

What are some of your favorite examples of illuminating Q-primes that were solved within your own field, theoretical computer science?

Scott: It’s hard to know where to begin with this question! In fact, my 59-page essay Why Philosophers Should Care About Computational Complexity was largely devoted to cataloging the various “Q-primes” on which I think theoretical computer science has made progress. However, let me mention four of my favorites, referring readers to the essay for details:

(1) One of the biggest, oldest questions in the philosophy of science could be paraphrased as: “why is Occam’s Razor justified? when we find simple descriptions of past events, why do we have any grounds whatsoever to expect those descriptions to predict future events?” This, I think, is the core of Hume’s “problem of induction.” Now, I think theoretical computer science has contributed large insights to this question — including Leslie Valiant’s Probably Approximately Correct (PAC) learning model, for which he recently won the Turing Award; the notion of Vapnik–Chervonenkis (VC) dimension; and the notion of the universal prior from algorithmic information theory. In essence, these ideas all give you various formal models where Occam’s Razor provably works — where you can give “simplicity” a precise definition, and then see exactly why simple hypotheses are more likely to predict the future than complicated ones. Of course, a skeptic about induction could still ask: OK, but why are the assumptions behind these formal models justified? But to me, this represents progress! The whole discussion can now start from a more sophisticated place than before.
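To give a feel for what these formal models prove (a hypothetical toy demonstration, not Valiant's actual setup), the sketch below trains two learners that both fit their training data perfectly. The one restricted to the "simple" hypothesis class of thresholds on [0, 1] (VC dimension 1) generalizes from a hundred samples; the memorizing learner, drawn from a vastly richer class, does no better than chance on unseen points.

```python
import random

random.seed(0)

def sample(n):
    # Data drawn from a known "true concept": label 1 iff x > 0.5.
    xs = [random.random() for _ in range(n)]
    ys = [1 if x > 0.5 else 0 for x in xs]
    return xs, ys

def learn_threshold(xs, ys):
    # "Simple" learner: hypothesis class of thresholds on [0, 1]
    # (VC dimension 1). Pick the candidate with fewest training errors.
    candidates = sorted(xs) + [0.0, 1.0]
    return min(candidates,
               key=lambda t: sum((1 if x > t else 0) != y
                                 for x, y in zip(xs, ys)))

def learn_table(xs, ys):
    # "Complex" learner: memorize the training set exactly; answer 0 on
    # anything unseen. Zero training error, but it predicts nothing.
    table = dict(zip(xs, ys))
    return lambda x: table.get(x, 0)

train_x, train_y = sample(100)
test_x, test_y = sample(1000)

t = learn_threshold(train_x, train_y)
table = learn_table(train_x, train_y)

thr_acc = sum((1 if x > t else 0) == y for x, y in zip(test_x, test_y)) / 1000
tab_acc = sum(table(x) == y for x, y in zip(test_x, test_y)) / 1000
```

The threshold learner lands within a hair of the true cutoff, while the lookup table scores near 50% on fresh data: "simplicity," made precise as low VC dimension, is what licenses the induction.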

(2) One of the first questions anyone asks on learning quantum mechanics is, “OK, but do all these branches of the wavefunction really exist? or are they just mathematical constructs used to calculate probabilities?” Roughly speaking, Many-Worlders would say they do exist, while Copenhagenists would say they don’t. Of course, part of what makes the question slippery is that it’s not even completely clear what we mean by words like “exist”! Now, I’d say that quantum computing theory has sharpened the question in many ways, and actually answered some of the sharpened versions — but interestingly, sometimes the answer goes one way and sometimes it goes the other! So for example, we have strong evidence that quantum computers can solve certain specific problems in polynomial time that would require exponential time to solve using a classical computer. Some Many-Worlders, most notably David Deutsch, have seized on the apparent exponential speedups for problems like factoring, as the ultimate proof that the various branches of the wavefunction must literally exist: “if they don’t exist,” they ask, “then where was this huge number factored? where did the exponential resources to solve the problem come from?” The trouble is, we’ve also learned that a quantum computer could NOT solve arbitrary search problems exponentially faster than a classical computer could solve them — something you’d probably predict a QC could do, if you thought of all the branches of the wavefunction as just parallel processors. If you want a quantum speedup, then your problem needs a particular structure, which (roughly speaking) lets you choreograph a pattern of constructive and destructive interference involving ALL the branches. You can’t just “fan out” and have one branch try each possible solution — twenty years of popular articles notwithstanding, that’s not how it works! 
We also know today that you can’t encode more than about n classical bits into n quantum bits (qubits), in such a way that you can reliably retrieve any one of the bits afterward. And we have lots of other results that make quantum-mechanical amplitudes feel more like “just souped-up versions of classical probabilities,” and quantum superposition feel more like just a souped-up kind of potentiality. I love how the mathematician Boris Tsirelson summarized the situation: he said that “a quantum possibility is more real than a classical possibility, but less real than a classical reality.” It’s an ontological category that our pre-mathematical, pre-quantum intuitions just don’t have a good name for.
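The "choreographed interference, not fan-out" point can be seen in a tiny state-vector simulation (a plain-Python sketch, with an arbitrarily chosen marked item): Grover's algorithm needs about (π/4)√N iterations of sign-flip plus diffusion to concentrate amplitude on the marked item, and the boost comes from all the branches interfering at once, not from one branch "finding" the answer.

```python
import math

N = 8          # unstructured search space of size 8 (3 qubits)
marked = 5     # the index the oracle recognizes (arbitrary choice)

# Uniform superposition: every "branch" starts with amplitude 1/sqrt(N).
state = [1 / math.sqrt(N)] * N

def grover_iteration(state):
    # Oracle: flip the sign of the marked amplitude.
    state = [-a if i == marked else a for i, a in enumerate(state)]
    # Diffusion: reflect every amplitude about the mean
    # ("inversion about the average").
    mean = sum(state) / N
    return [2 * mean - a for a in state]

# About (pi/4) * sqrt(N) iterations are optimal; for N = 8 that's 2.
for _ in range(2):
    state = grover_iteration(state)

prob_marked = state[marked] ** 2   # probability of measuring the marked item
```

After just two iterations the marked item is measured with probability roughly 0.95; the diffusion step only works because the seven unmarked amplitudes were there to interfere with.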

(3) Many interesting philosophical puzzles boil down to what it means to know something: and in particular, to the difference between knowing something “explicitly” and knowing it only “implicitly.” For example, I mentioned in my essay the example of the largest “known” prime number. According to the Great Internet Mersenne Prime Search, that number is currently 2^57885161 – 1. The question is, why can’t I reply immediately that I know a bigger prime number: namely, “the first prime larger than 2^57885161 – 1”? I can even give you an algorithm to find my number, which provably halts: namely, starting from 2^57885161, try each number one by one until you hit a prime! Theoretical computer science has given us the tools to sharpen a huge number of questions of this sort, and sometimes answer them. Namely, we can say that to know a thing “explicitly” means, not merely to have ANY algorithm to generate the thing, but to have a provably polynomial-time algorithm. That gives us a very clear sense in which, for example, 2^57885161 – 1 is a “known” prime number while the next prime after it is not. And, in many cases where mathematicians vaguely asked for an “explicit construction” of something, we can sharpen the question to whether or not some associated problem has a polynomial-time algorithm. Then, sometimes, we can find such an algorithm or give evidence against its existence!
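The "provably halting but useless" algorithm is easy to write down (a sketch using trial division, which is even slower than the primality tests one would really use): it always terminates, by Bertrand's postulate, yet its running time is exponential in the number of digits of its input, which is exactly why its output doesn't count as an explicitly known prime.

```python
def is_prime(n):
    # Trial division: correct, but takes time exponential in the
    # bit-length of n.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime(n):
    # Provably halts: by Bertrand's postulate there is a prime in (n, 2n).
    # But the running time grows exponentially with the number of digits
    # of n, so "run this algorithm" is implicit, not explicit, knowledge.
    m = n + 1
    while not is_prime(m):
        m += 1
    return m

# Fine for the small Mersenne prime 2^13 - 1 = 8191; utterly hopeless
# for 2^57885161 - 1.
p = next_prime(2**13 - 1)
```

The polynomial-time criterion draws the line cleanly: Lucas–Lehmer testing certifies 2^57885161 – 1 efficiently, whereas nothing efficient is known for "the first prime after it."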

(4) One example that I didn’t discuss in the essay — but a wonderful one, and one where there’s actually been huge progress in the last few years — concerns the question of how we can ever know for sure that something is “random.” I.e., even if a string of bits passes every statistical test for randomness that we throw at it, how could we ever rule out that there’s some complicated regularity that we simply failed to find? In the 1960s, the theory of Kolmogorov complexity offered one possible answer to that question, but a rather abstract and inapplicable one: roughly speaking, it said we can consider a string “random enough for our universe” if it has no computable regularities, if there’s no program to output the string shorter than the string itself. More recently, a much more practical answer has come from the Bell inequality — and in particular, from the realization that the experimental violation of that inequality can be used to produce so-called “Einstein-certified random numbers.” These are numbers that are provably random, assuming only (a) that they were produced by two separated devices that produced such-and-such outputs in response to challenges, and (b) there was no faster-than-light communication between the devices. But it’s only within the last few years that computer scientists figured out how to implement this striking idea, in such a way that you get out more randomness than you put in. (Recently, two MIT grad students proved that, starting from a fixed “seed” of, let’s say, 100 random bits, you can produce unlimited additional random bits in this Einstein-certified way — see Infinite Randomness Expansion and Amplification with a Constant Number of Devices.) And the experimental demonstration of these ideas is just getting started now. Anyway, I’m working on an article for American Scientist magazine about these developments, so rather than cannibalize the article, I’ll simply welcome people to read it when it’s done!
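A crude but computable stand-in for the Kolmogorov answer can be played with using an off-the-shelf compressor (a sketch only: zlib catches just a narrow class of regularities, and true Kolmogorov complexity is uncomputable, so "zlib can't shrink it" is far weaker than "no computable regularity exists").

```python
import os
import zlib

# Proxy for Kolmogorov complexity: the length of zlib's best effort,
# standing in for "the shortest program that outputs the string."
regular = b"ab" * 5000              # an obvious pattern, 10,000 bytes
unpredictable = os.urandom(10000)   # 10,000 bytes of OS entropy

short = len(zlib.compress(regular, 9))        # tiny: the pattern was found
long_ = len(zlib.compress(unpredictable, 9))  # near 10,000: nothing found
```

The patterned string collapses to a few dozen bytes, while the urandom bytes stay essentially incompressible. But the skeptic's worry survives intact: a string could defeat zlib and every other test we try while still hiding a regularity, which is what makes the Bell-certified approach so striking.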

Luke: What do you think about philosophy the field — work published by people in philosophy departments, who publish mostly in philosophy journals like Mind and Noûs, who are writing mostly for other philosophers?

I’ve previously called philosophy a “diseased discipline,” for many reasons. For one thing, people working in philosophy-the-field tend to know strikingly little about the philosophical progress made in other fields, e.g. computer science or cognitive neuroscience. For another, books on the history of philosophy seem to be about the musings of old dead guys who were wrong about almost everything because they didn’t have 20th century science or math, rather than about actual philosophical progress, which is instead recounted in books like The Information.

Do you wish people in other fields would more directly try to use the tools of their discipline to make philosophical progress on The Big Questions? Do you wish philosophy-the-field would be reformed in certain ways? Would you like to see more crosstalk between disciplines about philosophical issues? Do you think that, as Clark Glymour suggested, philosophy departments should be defunded unless they produce work that is directly useful to other fields (as is the case with Glymour’s department)?

Scott: Well, let’s start with the positives of academic philosophy!

(1) I liked the philosophy of math and science courses that I took in college. Sure, I sometimes got frustrated by the amount of time spent on what felt like Talmudic exegesis, but on the other hand, those courses offered a scope for debating big, centuries-old questions that my math and science courses hardly ever did.

(2) These days, I go maybe once a year to conferences where I meet professional philosophers of science, and I’ve found my interactions with them stimulating and fun. Philosophers often listen to what you say more carefully than other scientists do, and they’re incredibly good at spotting hidden assumptions, imprecise use of language, that sort of thing. Also, philosophers of science tend to double in practice as science historians: they often know much, much more about what, let’s say, Einstein or Bohr or Gödel or Turing wrote and believed than physicists and mathematicians themselves know.

(3) While my own reading of the philosophical classics has been woefully incomplete, I don’t feel like the time I spent with (say) Hume or J. S. Mill or William James or Bertrand Russell was wasted at all. You’re right that these “old dead guys” didn’t know all the math and science we know today, but then again, neither did Shakespeare or Dostoyevsky! I mean, sure, the central questions of philosophy have changed over time, and the human condition has changed as well: we no longer get confused over Zeno’s paradoxes or the divine right of kings, and we now have global telecommunications and the Pill. I just don’t think either human nature or human philosophical concerns have changed quickly enough for great literature on them written centuries ago to have ceased being great.

Having said all that, from what I’ve seen of academic philosophy, I do pretty much agree with your diagnoses of its “diseases.” By far the most important disease, I’d say, is the obsession with interpreting and reinterpreting the old masters, rather than moving beyond them. Back in college, after we’d spent an hour debating why this passage of Frege seemed to contradict that one, I’d sometimes want to blurt out: “so maybe he was having a bad day! I mean, he was also a raving misogynist and antisemite; he believed all kinds of things. Look, we’ve read Frege, we’ve learned from Frege, now can’t we just give the old dude a rest and debate the ground truth about the problems he was trying to solve?” Likewise, when I read books about the philosophy of physics or computing, it sometimes feels like I’m stuck in a time warp, as the contributors rehash certain specific debates from the 1930s over and over (say, about the Church-Turing Thesis or the Einstein-Podolsky-Rosen paradox). I want to shout, “enough already! why not help clarify some modern scientific debates — say, about quantum computing, or string theory, or the black-hole firewall problem, ones where we don’t already know how everything turns out?” To be fair, today there are philosophers of science who are doing exactly that, and who have interesting and insightful things to say. That’s a kind of philosophy that I’d love to see more of, at the expense of the hermeneutic kind.

Now, regarding Clark Glymour’s suggestion that philosophy departments be defunded unless they produce work useful to other fields — from what I understand, something not far from that is already happening! As bad as our funding woes in the sciences might be, I think the philosophers have it a hundred times worse, with like a quadrillion applicants for every tenure-track opening. So it seems to me like the right question is not how much further those poor dudes should be defunded, but rather: what can philosophy departments do to make themselves more vibrant, places that scientists regularly turn to for clarifying insights, and that deans and granting agencies get excited about wanting to expand? As a non-philosopher, I hesitate to offer unsolicited “advice” about such matters, but I guess I already did in the previous paragraph.

One final note: none of the positive or hopeful things that I said about philosophy apply to the postmodern or Continental kinds. As far as I can tell, the latter aren’t really “philosophy” at all, but more like pretentious brands of performance art that fancy themselves politically subversive, even as they cultivate deliberate obscurity and draw mostly on the insights of Hitler and Stalin apologists. I suspect I won’t ruffle too many feathers here at MIRI by saying this.

Luke: Suppose a mathematically and analytically skilled student wanted to make progress, in roughly the way you describe, on the Big Questions of philosophy. What would you recommend they study? What should they read to be inspired? What skills should they develop? Where should they go to study?

Scott: The obvious thing to say is that, as a student, you should follow your talents and passions, rather than following the generic advice of some guy on the Internet who doesn’t even know you personally!

Having said that, I would think broadly about which fields can give you enough scope to address the “Big Questions of Philosophy.”  You can philosophize from math, computer science, physics, economics, cognitive science, neuroscience, and probably a bunch of other fields too.  (My colleague Seth Lloyd philosophizes left and right, from his perch in MIT’s Mechanical Engineering department.)  Furthermore, all of these fields have the crucial advantage that they’ll offer you a steady supply of “fresh meat”: that is, new and exciting empirical or theoretical discoveries in which you can participate, and that will give you something to philosophize ABOUT (not to mention, something to do when you’re not philosophizing).  If I were working in a philosophy department, I feel like I’d have to make a conscious and deliberate effort to avoid falling into a “hermeneutic trap,” where I’d spend all my time commenting on what other philosophers had said about the works of yet other philosophers, and where I’d seal myself off from anything that had happened in the world of science since (say) Gödel’s Theorem or special relativity.  (Once again, though, if you find that your particular talents and passions are best served in an academic philosophy department, then don’t let some guy on the Internet stop you!)

Regardless of your major, I recommend taking a huge range of courses as an undergrad: math, computer science (both applied and theoretical), physics, humanities, history, writing, and yes, philosophy. Looking back on my own undergrad years, the most useful courses I took were probably my math courses, and that’s despite the fact that most of them were poorly taught!  Things like linear algebra, group theory, and probability have so many uses throughout science that learning them is like installing a firmware upgrade to your brain — and even the math you don’t use will stretch you in helpful ways.  After math courses, the second most useful courses I took were writing seminars — the kind where a small group of students reads and critiques one another’s writing, and the professor functions mostly as a moderator.  It was in such a seminar that I wrote my essay “Who Can Name the Bigger Number?”, which for better or worse, continues to attract more readers than anything else I’ve written in the fifteen years since.  One writing seminar, if it’s good, can easily be worth the whole cost of a college tuition.

If you’re the kind of person for whom this advice is intended, then you probably don’t have to be told to read widely and voraciously, anything you get curious about.  Don’t limit yourself to one genre, don’t limit yourself to stuff you agree with, and certainly don’t limit yourself to the assigned reading for your courses.  When I was an adolescent, my favorites were just what a nerd stereotyper might expect: science fiction (especially Isaac Asimov), books about programming and the software industry, and math puzzle books (especially Martin Gardner).  A few years later, I became obsessed with reading biographies of scientists, like Feynman, Ramanujan, Einstein, Schrödinger, Turing, Gödel, von Neumann, and countless lesser luminaries.  I was interested in every aspect of their lives — in their working habits, their hobbies, their views on social and philosophical issues, their love lives — but, I confess, I was particularly interested in what they were doing as teenagers, so that I could compare to what I was doing and sort of see how I measured up.  At the same time, my reading interests were broadening to include politics, history, philosophy, psychology, and some contemporary fiction (I especially like Rebecca Goldstein).  It was only in grad school that I felt I’d sufficiently recovered from high-school English to tackle “real literature” like Shakespeare — but when I did, it was worth it.

As for where to study, well, the “tautological” answer is wherever will give you the best opportunities!  There are certain places, like Boston or the Bay Area, that are famous for having high concentrations of intellectual opportunity, but don’t go somewhere just because of what you’ve heard about the general atmosphere or prestige: particularly for graduate school, go where the particular people or programs are that resonate for you.  In quantum computing, for example, one of the centers of the world for the last decade has been Waterloo, Canada — a place many people hadn’t even heard of when I did my postdoc there eight years ago (though that’s changing now).  And one of the intellectually richest years of my life came when I attended The Clarkson School, a program that lets high-school students live and take courses at Clarkson University in Potsdam, NY.  (I went there when I was 15, and was looking for something less prison-like than high school.)  If, for what you personally want to do, there are better opportunities in Topeka, Kansas than at Harvard, go to Topeka.

Luke: Finally, I’d like to ask about which object-level research tactics — more specific than your general “bait and switch” strategy — you suspect are likely to help with philosophical research, or perhaps with theoretical research of any kind.

For example, some of the tactics we’ve found helpful at MIRI include:

  • When you’re confused about a fuzzy, slippery concept, try to build a simple formal model and push on it with the new formal tools that then become available to you. Even if the model doesn’t capture the complexity of the world, pushing things into the mathematical realm can lead to progress. E.g. the VNM axioms don’t exactly capture “rationality,” but it sure is easier to think clearly about rationality once you have them. Or: we’re confused about how to do principled reflective reasoning within an agent, so even though advanced AIs are unlikely to literally run into a “Löbian obstacle” to self-reflection, setting up the problem that way (in mathematical logic) can lead to some interesting insights in (e.g.) probabilistic metamathematics for reflective reasoning.
  • Look for tools from other fields that appear to directly map onto the phenomena you’re studying. E.g. model moral judgment as an error process amenable to Bayesian curve fitting.
  • Try to think of how your concept could be instantiated with infinite computing power. If you can’t do that, your concept might be fundamentally confused.
  • If you’re pretty familiar with modern psychology, then… When using your intuitions to judge between options, try to think about which cognitive algorithms could be generating those intuitions, and whether they are cognitive algorithms whose outputs you reflectively endorse.
  • To make the thing you’re studying clearer, look just next to it, and around it. Foer (2009) explains this nicely in the context of thinking about one’s values and vegetarianism: “A simple trick from the backyard astronomer: if you are having trouble seeing something, look slightly away from it. The most light-sensitive parts of our eyes (those we need to see dim objects) are on the edges of the region we normally use for focusing. Eating animals has an invisible quality. Thinking about dogs, and their relationship to the animals we eat, is one way of looking askance and making something invisible visible.”
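As a toy instance of the first tactic above (a hypothetical example, not MIRI's actual formalism): the fuzzy notion of a "coherent preference" can be pushed into mathematics by asking when a set of strict pairwise preferences admits a numerical utility function, which happens exactly when the preference graph has no cycles.

```python
def utility_representation(preferences):
    # preferences: pairs (a, b) meaning "a is strictly preferred to b".
    # Returns {item: utility} consistent with every pair, or None if the
    # preferences are cyclic (in which case no utility function exists).
    items = {x for pair in preferences for x in pair}
    beats = {x: set() for x in items}
    indegree = {x: 0 for x in items}
    for a, b in preferences:
        if b not in beats[a]:
            beats[a].add(b)
            indegree[b] += 1
    # Kahn's algorithm: repeatedly peel off an undominated item, assigning
    # decreasing utilities; getting stuck means there is a preference cycle.
    utility, frontier = {}, [x for x in items if indegree[x] == 0]
    u = len(items)
    while frontier:
        a = frontier.pop()
        utility[a] = u
        u -= 1
        for b in beats[a]:
            indegree[b] -= 1
            if indegree[b] == 0:
                frontier.append(b)
    return utility if len(utility) == len(items) else None

coherent = [("cake", "pie"), ("pie", "fruit")]
cyclic = coherent + [("fruit", "cake")]
```

The model deliberately ignores almost everything "preference" means; the payoff is that coherence becomes a property you can check, and the money-pump argument against cyclic preferences becomes a one-line theorem.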

Which object-level thinking tactics, at roughly this level of specificity, do you use in your own theoretical (especially philosophical) research? Are there tactics you suspect might be helpful, which you haven’t yet used much yourself?

Scott: As far as I can remember, I’ve never set out to do “philosophical research,” so I can’t offer specific advice about that. What I have often done is research in quantum computing and complexity theory that was motivated by some philosophical issue, usually in foundations of quantum mechanics. (I’ve also written a few philosophical essays, but I don’t really count those as “research.”) Anyway, I can certainly offer advice about doing the kind of research I like to do!

(1) Any time you find yourself in a philosophical disagreement with a fellow scientist, don’t be content just to argue philosophically — even if you’re sure you can win the argument! Instead, think hard about whether you can go further, and find a concrete technical question that captures some little piece of what you’re disagreeing about. Then see if you can answer that technical question. Of course, any time you do this, you have to be prepared for the possibility that the answer will go your opponent’s way, rather than yours! But what’s nice is that you get to publish a paper even then. (One of the best ways to tell whether a given enterprise is scientific at all, rather than ideological, is by asking whether the participants will opportunistically “go to bat for the opposing side” whenever they find a novel truth on that side.) I’d estimate that up to half the papers I’ve written had their origin in my reading or overhearing some claim — for example, “Grover’s algorithm obviously can’t work for searching actual physical databases, since the speed of light is finite,” or “the quantum states arising in Shor’s algorithm are obviously completely different from anything anyone has ever seen in the lab,” or “the interactive proof results obviously make oracle separations completely irrelevant” — and getting annoyed, either because I thought the claim was false, or because I simply didn’t think it had been adequately justified. The cases where my annoyance paid off are precisely the ones where, rather than just getting mad, I managed to get technical!

(2) Often, the key to research is figuring out how to redefine failure as success. Some stories: when Alan Turing published his epochal 1936 paper on Turing machines, he did so with great disappointment: he had recently learned that Alonzo Church had independently arrived at similar results using lambda calculus, and he didn’t know whether anyone would still be interested in his alternative, machine-based approach. In the early 1970s, Leonid Levin delayed publishing about NP-completeness for several years: apparently, his “real” goal was to prove graph isomorphism was NP-complete (something we now know is almost certainly false), and in his mind, he had failed. Instead, he merely had a few “trivialities,” like the definitions of P, NP, and NP-completeness, and the proof that satisfiability was NP-complete. And Levin’s experience is far from unique: again and again in mathematical research, you’ll find yourself saying something like: “goddammit, I’ve been trying for six months to prove Y, but I can only prove the different/weaker statement X! And every time I think I can bridge the gap between X and Y, yet another difficulty rears its head!” Any time that happens to you, think hard about whether you can write a compelling paper that begins: “Y has been a longstanding open problem. In this work, we introduce a new idea: to make progress on Y by shifting attention to the more tractable X.” More broadly, experience has shown that scientists are terrible judges of which of their ideas will be interesting or important to others. Pick any scientist’s most cited paper, and there’s an excellent chance that the scientist herself, at one point, considered it a “little recreational throwaway project” that was barely worth writing up. After you’ve seen enough examples of that, you learn you should always err on the side of publishing, and let posterity sort out which of your ideas are most important. 
(Yet another advantage of this approach is that, the more ideas you publish, the less emotionally invested you are in any one of them, so the less crushed you are when a few turn out to be wrong or trivial or already known.)

(3) Sometimes, when you set out to prove some mathematical conjecture, your first instinct is just to throw an arsenal of theory at it. “Hey, what if I try a topological fixed-point theorem? What if I translate the problem into a group-theoretic language? If neither of those works, what if I try both at once?” Sometimes, you rise so quickly this way into a stratosphere of generality that the original problem is barely a speck on the ground. And yes, some problems can be beaten into submission using high-powered theory. But in my experience, there are two enormous risks with this approach. First, you’re liable to get lost on a wild goose chase, where you get so immersed in theory and techniques that you lose sight of your original goal. It’s as if your efforts to break into a computer network lead you to certain complicated questions about the filesystem, which in turn lead you to yet more complicated questions about the kernel… and in the meantime someone else breaks in by guessing people’s birthdays for their passwords. Second, you’re also liable to fool yourself this way into thinking you’ve solved the problem when you haven’t. When you let high-powered machinery take the place of hands-on engagement with the problem, a single mistake in applying the machinery can creep in unbelievably easily.

These risks are why I’ve learned over time to work in an extremely different way. Rather than looking for “general frameworks,” I look for easy special cases and simple sanity checks, for stuff I can try out using high-school algebra or maybe a five-line computer program, just to get a feel for the problem. Even more important, when I’m getting started, I don’t think about proof techniques at all: I think instead about obstructions. That is, I ask myself, “what would the world have to be like for the conjecture to be false? what goes wrong if I try to invent a simple counterexample? does anything go wrong? it does? OK then, what obstruction keeps me from proving this conjecture in the simplest, dumbest way imaginable?” I find that, after you’ve felt out the full space of obstructions and counterexamples, and really honestly convinced yourself of why the conjecture should be true, finding the proof techniques by which to convince everyone else is often a more-or-less routine exercise.
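[Editor’s note: as a toy illustration of the “five-line computer program” tactic described above (my example, not Aaronson’s), consider Euler’s classic observation that n² + n + 41 is prime for small n. Before reaching for theory, a quick counterexample search settles whether the pattern could possibly hold in general.]

```python
# Sanity-check a conjecture before trying to prove it:
# is n^2 + n + 41 prime for every nonnegative integer n?
def is_prime(m: int) -> bool:
    # trial division up to the square root
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

counterexamples = [n for n in range(100) if not is_prime(n * n + n + 41)]
print(counterexamples[0])  # the conjecture first fails at n = 40, since 1681 = 41^2
```

The polynomial really is prime for n = 0 through 39, so a proof attempt that never checked n = 40 could feel tantalizingly close while chasing something false — exactly the kind of obstruction a five-line search exposes immediately.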

Finally, you ask about tactics that I suspect might be helpful, but that I haven’t used much myself. One that springs to mind is to really master a tool like Mathematica, MATLAB, Maple, or Magma — that is, to learn it so well that I can code as fast as I think, and just let it take over all the routine / calculational / example-checking parts of my work. As it is, I use pretty much the same antiquated tools that I learned as an adolescent, and I rely on students whenever there’s a need for better tools. A large part of the problem is that, as a “tenured old geezer,” I no longer have the time or patience to learn new tools just for the sake of learning them: I’m always itching just to solve the problem at hand with whatever tools I know. (The same issue has kept me from learning new mathematical tools, like representation theory, even when I can clearly see that they’d benefit me.)

Luke: Thanks, Scott!

  • Bob

    If possible, could you record these conversations and allow us to listen to the audio? It would be helpful for those of us who prefer to listen. – Thanks

  • Johan Nystrom

    On the whole, this was a great read, with many insights and points I agree with. I’m fully in agreement with the basic position that computer science and philosophy as fields have much to learn from each other. But I found the flat-out dismissal of postmodernism and continental philosophy unfortunate and extremely counterproductive. I wrote up my reaction to those particular comments here:

  • KLRajpal

    A quantum computer works with switches that not only exist in ON and OFF states but also in states that are simultaneously ‘ON and OFF’. The assumption that a quantum switch can be ‘ON and OFF’ at the same time is based on an INCORRECT concept of Linear Polarization.

    EPR Paradox made clear in a graphical manner
    Linear Polarization, Graphical Representation
