Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He is known as a founder of the field of prediction markets; he was a chief architect of the Foresight Exchange, DARPA’s FutureMAP, IARPA’s DAGGRE, and SciCast, and is chief scientist at Consensus Point. He started the first internal corporate prediction market at Xanadu in 1990, and invented the widely used market scoring rules. He also studies signaling and information aggregation, and is writing a book on the social implications of brain emulations. He blogs at Overcoming Bias.
Hanson received a B.S. in physics from the University of California, Irvine in 1981, an M.S. in physics and an M.A. in Conceptual Foundations of Science from the University of Chicago in 1984, and a Ph.D. in social science from Caltech in 1997. Before getting his Ph.D. he researched artificial intelligence, Bayesian statistics, and hypertext publishing at Lockheed, NASA, and elsewhere.
Luke Muehlhauser: In an earlier blog post, I wrote about the need for what I called AGI impact experts who “develop skills related to predicting technological development, predicting AGI’s likely impact on society, and identifying which interventions are most likely to increase humanity’s chances of safely navigating the creation of AGI.”
In 2009, you gave a talk called, “How does society identify experts and when does it work?” Given the study you’ve done and the expertise you’ve developed, what do you think of humanity’s prospects for developing these AGI impact experts? If they are developed, do you think society will be able to recognize who is an expert and who is not?
Robin Hanson: One set of issues has to do with existing institutions and what kinds of experts they tend to select, and what kinds of topics they tend to select. Another set of issues has to do with, if you have time and attention and interest, to what degree can you acquire expertise on any given subject, including AGI impacts, or tech forecasting more generally? A third subject, which somewhat overlaps the first two, is: if you did acquire such expertise, how would you convince anybody that you had it?
I think the easiest question to answer is the second one. Can you learn about this stuff?
I think some people have been brought up in a Popperian paradigm, where there’s a limited scientific method and a limited range of topics it can apply to. You turn the crank, and if the method applies to those topics then you have science and truth and something you’ve learned; otherwise everything else is opinion, equally undifferentiated opinion.
I think that’s completely wrong. That is, we have a wide range of intellectual methods out there and a wide range of social institutions that coordinate efforts.
Some of those methods work better than others, and then there are some topics on which progress is easier than others, just by the nature of the topic. But honestly, there are very few topics on which you can’t learn more if you just sit down and work at it.
Of course, that doesn’t mean you simply stare at a wall. Most topics are related to other topics on which people have learned some things. Whatever your topic is, figure out the related topics, learn about those related topics, learn as many different things as you can about what other people know about the related topic, and then start to intersect and connect them to your topic and work on it.
Just blood-and-sweat work can get you a long way on a very wide range of topics. Of course, just because you can learn about almost anything doesn’t mean you should. It doesn’t mean it’s worth the effort to society or yourself, and it doesn’t mean that, for any given subject, there are easy ways to convince other people that you’ve learned something.
There are methods that you can use, where it becomes easier to convince people of things, and you might prefer to focus on those topics or methods where it is easier to convince people that you know something. A related issue is, how impressed are people by your knowing something?
Many of the existing institutions like academic institutions or media institutions that identify and credential people as experts on a variety of topics function primarily as ways to distinguish and label people as impressive.
People want to associate with, connect with, read about, and hear talks from people who are acknowledged as impressive as part of a network of experts who co-acknowledge each other as impressive. It’s called status.
Some institutions are dominated by people who are mainly trying to acquire credentials as being impressive, so they can seem impressive, be hired for impressive jobs, have punditry positions that are reserved for impressive people, be on boards of directors, etc.
Also, there are standard procedures by which you would do things so people could say, “Yes, he knows the procedures,” and “Yes, you can follow them,” and “Yes, those are damn hard procedures. Anybody who can do that must be damn impressive.”
But there are things you can learn about that it’s harder to become credentialed as impressive at.
Generically, when you just pick any topic in the world because it’s interesting or important in some more basic way, it isn’t necessarily well-suited for being an impressiveness display.
What about futurism? For various aspects of the future, if you sit down and work at it, you can make progress. It’s not very well-suited for proving that you’ve made progress, because the future takes a while to get here. Of course, when it does get here, it will be too late for you to gain much advantage from finally having been proven as impressive on the subject.
I like to compare the future to history. History is also something we are uncertain about. We have to take a lot of little clues and put them together, to draw inferences about the past. We have a lot of very concrete artifacts that we focus on. We can at least demonstrate our impressive command of all those concrete artifacts, and their details, and locations, and their patterns. We don’t have something like that for the future. We will eventually, of course.
It’s much harder to demonstrate your command of the future. You can study the future somewhat by using complicated statistical techniques that we’ve applied to other subjects. That’s possible. It still doesn’t tend to demonstrate impressiveness in quite as dramatic a way as applying statistical techniques to something where you can get more data next week that verifies what you just showed in your statistical analysis.
I also think the future is where people project a lot of hopes. They’re just less willing to be neutral about it. People are more willing to say, “Yes, sad and terrible things happened in the past, but we get it. We once believed that our founding fathers were great people, and now we can see they were shits.” I guess that’s so, but for the future their hopes are a little harder to knock off.
You can’t prove to them the future isn’t the future they hope it is. They’ve got a lot of emotion wrapped up in it. Often it’s just easier to show you’re being an impressive academic on subjects where most people don’t have a very strong emotional commitment, because that tends to get in the way.
Luke: I have some hunches about some types of scientific training that might give people different perspectives on how well we can do at medium- to long-term tech forecasting. I wanted to get your thoughts on whether you think my hunches match up with your experience.
One hunch is that, for example, if someone is raised in a Popperian paradigm, as opposed to maybe somebody younger who was raised in a Bayesian paradigm, the Popperian will have a strong falsificationist mindset, and because you don’t get to falsify hypotheses about the future until the future comes, these kinds of people will be more skeptical of the idea that you can learn things about the future.
Or in the risk analysis community, there’s a tradition there that’s being trained in the idea that there is risk, which is something that you can attach a probability to, and then there’s uncertainty, which is something that you don’t know enough about to even attach a probability to. A lot of the things that are decades away would fall into that latter category. Whereas for me, as a Bayesian, uncertainty just collapses into risk. Because of this, maybe I’m more willing to try to think hard about the future.
Robin: Those questions are somewhat framed from the point of view of an academic, or of an academic familiar with relatively technical kinds of skills. But say you’re running a business, and you have some competitors, and you’re trying to decide where will your field go in the next few years, or what kind of products will people like, or you’re running a social organization, and you’re trying to decide how to change your strategy.
Another example: you have some history, and you’re trying to go back and figure out what your grandfathers were doing, or almost any other random question people might ask about the world. The Popperian stuff doesn’t help at all. It’s completely useless. If you just had any sort of habit of dealing with real problems in the world, you would have developed a tolerance for expecting things not to be provable or falsifiable.
You’d also develop an expectation that there are a range of probabilities for things. You’ll be uncertain, and you’ll have to deal with that. It’s only in a rarefied academic world where it would ever be plausible to deny uncertainty, or to insist on falsification, because that’s just almost never possible or relevant for the vast majority of questions you could be trying to ask.
Luke: Getting back to the question of how someone might develop expertise in, for example, AGI impacts or, let’s just say more broadly, long term tech forecasting…
What are your thoughts on some of the key training that someone would need to undergo? It could even be mental habits, memorizing certain fields of material where we’ve done a lot of stamp collecting in the scientific sense, etc. What’s relevant, do you think, for developing this kind of expertise?
Robin: We live in a world where people spend a substantial fraction of their career learning about stuff, and then it’s only after they’ve learned about a lot of things that they become the most productive about applying the stuff they’ve learned.
We’re just in a world where people have long life spans, and they’re competing with other people with long life spans. You have to expect that if you’re going to be the best at something you will have to spend a large fraction of your life devoted to it. Sorry, no shortcuts. That’s just a message people might not want to hear but that’s the way it goes.
You’ll also have to figure out where and how much to specialize. You can’t learn 20 fields as well as the best people can know them. Sorry. You just won’t have time. You’ll have to be some very unusual person in some way to get anywhere close to that.
You will have to decide what aspects of this future you want to focus on. There are many different aspects and they don’t all come together as a package, where if you learn about one you automatically learn about the others.
In tech forecasting, one category of questions is about what technologies are feasible, in principle. To have a sense for that kind of question, to answer it, you will need to have spent a substantial fraction of your life learning about the kinds of technologies you’re talking about. You’ll also want to have spent some substantial time looking at the histories of other technologies and how they’ve progressed over time: the typical trajectory of technology and innovation, where it tends to come from, how many starts tend to be false starts, et cetera.
Another category of questions is about the social implications of the technology. Batteries, say. That requires a whole different set of expertise. It can be informed by knowing what a battery is, and how it works, and who might make them, and when they’ll get how good. But in order to forecast the social implications of batteries you’ll have to know about societies, and how they work, and what they’re made out of, and there are just a lot fewer people working on that. You’ll probably have to learn more different fields. Maybe you could learn both a lot of social science and a lot of battery tech, but that’ll take a lot of time.
One of the main questions about studying anything, including the future, is how to specialize, how to make a division of labor. As usual, like in software, the key to division of labor is interfaces. You want to carve nature at its joints so the interfaces are as simple and robust and modular as possible.
You want to ask where there are the fewest dependencies between different questions, so that you can cut the expertise lines there. You say, “You guys over here, you work on the answer to this question, and you guys over here, you take the answer to that question and go do something with it.” The smaller you can make that set of answers and questions, the more modularity and independence you can have, and the more you can separate the work.
Whenever you have different teams with an interface, they’ll each have to learn a fair bit about the interface itself in order to be productive. They’ll have to know what the interface means and where it comes from. What parts of it are uncertain? What parts of it change fast? What parts of it are people serious about, and what do they tend to lie about? All sorts of things about an interface. That’s part of the search.
One obvious, very plausible interface is between people who predict that particular devices will be available at particular points in time for particular costs, with particular capabilities, and other people who talk about what the hell that means for the rest of society. That seems to me a relatively tight interface, compared to the other interfaces we could choose here.
Of course, within technology you could divide it up. Somebody might know lithium batteries, and they just know lithium batteries really well, and they can talk about the future of lithium batteries.
But if graphene batteries are coming down the pike, they’re not going to understand that very well. Somebody else might specialize in graphene batteries, or just specialize in knowing the range of kinds of batteries available, and what might happen to them.
Somebody else might specialize more in the distribution of technological innovation. When you draw a chart of capacity over time, how often does that chart follow a straight line versus something else, and how misleading can it be when you only see a short-term piece of it, etc.? Just a sense for a range of different kinds of histories of technologies, and what sort of variety of paths we see. You could specialize in that.
But if you’re in an area of futurism where there aren’t very many other people doing it, you should expect things to be more like a startup, where you just have to be flexible. Not because being flexible is somehow intrinsically more productive in general, but because it’s required when there’s a bunch of things to be done and not very many people to do them. You will, by necessity, have to acquire a wider range of skills and a wider range of approaches, consider a wider range of possibilities, accept restructuring more often, and change goals more often.
I would love it if some day serious futurism were as detailed and specialized as history. Historians have broken up the field of history into lots of different areas and times of history, and a lot of different aspects. Each person can see a previous track record of people with careers in history, and what they focused on, and the set of open questions.
Then they can go into history and take a particular area, and know what a career in history looks like, and know what kinds of skills other people in that area acquired, and what it took to become impressive. If futurism became that specialized, then that’s what it would be like for studying the future too.
It just happens not to be that way at the moment, because there are just a lot fewer people working on it. You’ll probably have to learn more different fields than you otherwise would, learn more different skills than you otherwise would, and accept more changing of your mind about what was important and what the key questions were, just because there aren’t very many people doing serious futurism.
Luke: Robin, you used this term “serious futurism,” which happens to be the term I’ve been using for futurists who are trying to figure out what’s actually true about the future, as opposed to meeting the demand for morality tales about the future, or meeting a demand for hype that fuels excited talk about, “Gee whiz, cool stuff from the future,” etc.
When I try to do serious futurism, most of the sources I encounter are not trying to meet the demand of figuring out what’s true about the future. I have to weed through a lot of material that’s meeting other demands, before I find anything that’s useful to my project of serious futurism.
I wonder, from your perspective, what are your thoughts on what one would do if you wanted to try to make serious futurism more common, get people excited about serious futurism, show them the value of the project, get them to invest in it so that there is more of a field, so there are more people doing all the different things that need to be done in order to figure out what the future is going to be like?
Robin: There is this world of people who think of themselves as serious futurists. I’ve had limited contact with them. Before we go into talking about them, I think it would help to just bracket this by noticing that there are many other intellectual areas which have had a similar problem, which I will phrase as widespread public interest, limited academic interest, and then an attempt to carve out a serious version of the field.
Two examples, relatively extreme examples, are sex and aliens. Both of these are subjects that people have long found fascinating to talk about. And for the most part, academics avoided both of them for a long time. Then some academics, at one point, tried to carve out an area of being serious about them.
In both of those cases, and, I think, in lots of other cases, you can see what the key problem is. As soon as you start to seriously engage the subject, if you don’t do it in a way that really clearly distinguishes how you’re doing it from how all those other people are doing it, you look like them. Then you acquire, in people’s minds, all the attributes of them which, of course, include not being serious or worthy of attention.
For example, for aliens the first big method you can use to distinguish yourself is to just search the sky for signals with huge radio telescopes. You’re not going to talk about all the other aspects of aliens. You’re just going to be hard-science radio telescope guy, searching the sky for signals. What distinguishes you from everybody else talking about aliens?
First of all, you have a radio telescope and they don’t. Second of all, you know how to do lots of complicated signal processing. Thirdly, there are other people who do signal processing, and you’re just inheriting and applying their methods, so that’s a standard thing. It’s complicated to learn that, and you could have gone to the schools where you learned that stuff, and you can pass muster with those people when it comes to knowing how to build radio telescopes and do the signal processing. You’re just applying that to aliens. Hey. That’s just another subject.
With sex, the way they did it was they said, “Well, we’re going to put people in a room and they’ll be having sex. We’ll be watching them in all the standard ways we ever watch anybody doing anything, as a social scientist. We’re going to have the same sort of selection criteria, and methods of recording things, and recording variations, and things like that. We’re just going to do it in a very big, standardized way in order to show we’re different, we’re serious. All those other people like to talk about sex all the time. But they couldn’t be doing this. They don’t even know what the words we’re using mean. They’re not one of us. We’re not one of them.”
Of course, that means that you are, in some sense, throwing away all the data people had about sex, or at least setting it aside. You’re saying, “All these things people are claiming about sex, that’s all coming from these ordinary conversations about sex. That’s not good enough for us so we’re going to wipe the slate clean and just see what we can get with our own new data.”
In futurism, there are a bunch of futurists who are like inspirational speaker futurists, who just talk about all the cool stuff coming down the line and how society will change and that sort of thing. Then there’s the sort of academic futurists who see themselves as distinctly different from that.
Many of them focus on collecting data series about previous technologies or predictions, and then they project those data series forward and do statistical projection and prediction. They see themselves as serious academics, and one of the ways they distinguish themselves from these other futurists is that if they don’t have a data series for it then they’re not going to talk about it. It’s not in the realm of their kind of futurism.
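As an illustrative aside (not part of the conversation), the kind of data-series projection described here might look like the following minimal sketch, which fits a log-linear trend to a made-up technology-capacity series and extrapolates it forward; the numbers and the exponential-growth assumption are entirely hypothetical.

```python
# Minimal illustrative sketch (not from the interview): project a technology
# data series forward by fitting a log-linear trend. All data are made up.
import numpy as np

years = np.array([2000, 2003, 2006, 2009, 2012])   # observation years (hypothetical)
capacity = np.array([1.0, 1.8, 3.1, 5.6, 10.2])    # capacity index (hypothetical)

# Assume roughly exponential improvement, so log(capacity) is linear in time.
slope, intercept = np.polyfit(years, np.log(capacity), 1)

# Extrapolate the fitted trend a decade past the last observation.
future_years = np.arange(2013, 2023)
projection = np.exp(intercept + slope * future_years)

for year, value in zip(future_years, projection):
    print(f"{year}: projected capacity index ~ {value:.1f}")
```

The point of the sketch is only that this style of academic futurism confines itself to questions for which such a data series exists.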
For my work, I’m taking a risky strategy, which I don’t have any strong reason to expect to succeed, of simply having been a social science professional for a long time, taking a lot of detailed social science knowledge, that most people don’t know, and applying it to my particular scenario, using social science lingo and concepts, and basically saying, “Doesn’t this look different to you?”
Basically I’m saying, “A social scientist, when they look at this they will recognize that this is using professional, state of the art concepts and applying them to this particular subject.”
I’ve gotten, certainly, some people, reading my draft, to say, “You’re coming up with a lot more detail than I would have thought possible.” That’s sort of what I’m proud of. A lot of people look at a scenario like this and they kind of wave their hands and say, “It doesn’t look like we can figure out anything about that. That’s just too hard and complicated.” I’m going to come back and say, “Actually, it’s one of those things that is hard but not impossible, so it just takes more work.”
Luke: You mentioned this skepticism that many people have about our ability to figure out things, at all, about the far future, or figure them out in any amount of detail.
One quote that comes to mind is from Vaclav Smil, from his book Global Catastrophes and Trends, where he writes specifically about AI:
If the emergence of superior machines… is only a matter of time then all we can do is wait passively to be eliminated. If such developments are possible, we have no rational way to assess the risks. Is there a 75 percent or 0.75 percent chance of self replicating robots taking over the Earth by 2025…?
This is a very pessimistic, fatalistic view about our ability to forecast AI in particular. Actually, he says the same thing about nanotechnology. What do you think about this?
Robin: I was a physics student and then a physics grad student. In that process, I think I assimilated what was the standard worldview of physicists, at least as projected onto the students. That worldview was that physicists were great, of course, and that physicists could, if they chose to, go out to all those other fields that all those other people keep mucking up and not making progress on, and make a lot faster progress, if progress was possible. But they don’t really want to, because that stuff isn’t nearly as interesting as physics, so they stay in physics and make progress there.
For many subjects, they just don’t think it’s possible to learn anything, to know anything. For physicists, the usual attitude toward social science was basically that there’s no such thing as social science; there can’t be such a thing as social science.
Surely you can look at some little patterns but because you can’t experiment on people, or because it’ll be complicated, or whatever it is, it’s just not possible. Partly, that’s because they probably tried for an hour, to see what they could do, and couldn’t get very far.
It’s just way too easy to have learned a set of methods, see some hard problem, try it for an hour, or even a day or a week, not get very far, and decide it’s impossible, especially if you can make it clear that your methods definitely won’t work there.
You don’t, often, know that there are any other methods to do anything with because you’ve learned only certain methods.
It’s very hard to say that something can’t be learned. It’s much easier to say that you haven’t figured anything out or, perhaps, that a certain kind of method runs out there. It’s easier to imagine trying all the different paths you can use in a certain method, even though that’s pretty hard too.
But, to be able to say that nobody can learn anything about this, in order to say that with some authority, you have to have some understanding of all the methods out there, and what they can do, and have tried it for a while.
Academics tend to know their particular field and its methods very well, and then other fields kind of fade away and blur together. If you’re a physicist, the difference between physics and chemistry is overwhelmingly important, while the difference between sociology and economics seems like terminology or something. And vice versa: if you’re an economist, the difference between economics and sociology seems overwhelming, and the difference between physics and chemistry seems like picky terminology. That just means that most people don’t know very many methods. They don’t know very many of all the different things you can do.
As one of the rare people who have spent a lot of time learning a lot of different methods, I can tell you there are a lot out there. Furthermore, I’ll stick my neck out and say most fields know a lot. In almost all academic fields where there are lots of articles and such published, people know a lot.
Luke: Thanks, Robin!