Ben Goertzel on AGI as a Field


Dr. Ben Goertzel is Chief Scientist of financial prediction firm Aidyia Holdings; Chairman of AI software company Novamente LLC and bioinformatics company Biomind LLC; Chairman of the Artificial General Intelligence Society and the OpenCog Foundation; Vice Chairman of futurist nonprofit Humanity+; Scientific Advisor of biopharma firm Genescient Corp.; Advisor to the Singularity University and MIRI; Research Professor in the Fujian Key Lab for Brain-Like Intelligent Systems at Xiamen University, China; and General Chair of the Artificial General Intelligence conference series. His research work encompasses artificial general intelligence, natural language processing, cognitive science, data mining, machine learning, computational finance, bioinformatics, virtual worlds and gaming, and other areas. He has published a dozen scientific books, 100+ technical papers, and numerous journalistic articles. Before entering the software industry he served as a university faculty member in several departments of mathematics, computer science and cognitive science, in the US, Australia and New Zealand. He has three children and too many pets, and in his spare time enjoys creating avant-garde fiction and music, and exploring the outdoors.


Luke Muehlhauser: Ben, you’ve been heavily involved in the formation and growth of a relatively new academic field — the field of artificial general intelligence (AGI). Since MIRI is now trying to co-create a new academic field of study — the field of Friendly AI research — we’d love to know what you’ve learned while co-creating the field of AGI research.

Could you start by telling us the brief story of the early days? Of course, AI researchers had been talking about human-level AI since the dawn of the field, and there were occasional conferences and articles and books on the subject, but the field seemed to become more cohesive and active after you and a few others pushed on things under the name “artificial general intelligence.”


Ben Goertzel: I had been interested in “the subject eventually to be named AGI” since childhood, and started doing research in the area at age 16 (which was the end of my freshman year of college, as I started university at 15). However, it soon became apparent to me that “real AI” (the term I used privately before launching the term AGI) had little to do with the typical preoccupations of the academic or industry AI fields. This is part of what pushed me to do a PhD in math rather than AI. Rather than do a PhD on the kind of narrow AI that was popular in computer science departments in the 1980s, I preferred to spend grad school learning math and reading widely and preparing to work on “real AI” via my own approaches…

I didn’t really think about trying to build a community or broad interest in “real AI” until around 2002, because until that point it just seemed hopeless. Around 2002 or so, it started to seem to me — for a variety of hard-to-pin-down reasons — that the world was poised for an attitude shift. So I started thinking a little about how to spread the word about “real AI” and its importance and feasibility more broadly.

Frankly, a main goal was to create an environment in which it would be easier for me to attract a lot of money or volunteer research collaborators for my own real-AI projects. But I was also interested in fostering work on real AI more broadly, beyond just my own approach.

My first initiative in this direction was editing a book of chapters by researchers pursuing ambitious AI projects aimed at general intelligence, human-level intelligence, and so forth. This required some digging around, to find enough people to contribute chapters — i.e. people who were both doing relevant research, and willing to contribute chapters to a book with such a focus. It also required me to find a title for the book, which is where the term “AGI” came from. My original working title was “Real AI”, but I knew that was too edgy — since after all, narrow AI is also real AI in its own sense. So I emailed a bunch of friends soliciting title suggestions and Shane Legg proposed “Artificial General Intelligence.” I felt that “AGI” lacked a certain pizazz that other terms like “Artificial Life” have, but it was the best suggestion I got so I decided to go for it. Reaction to the term was generally positive. (Later I found that a guy named Mark Gubrud had used the term before, in passing in an article focused broadly on future technologies. I met Mark Gubrud finally at the AGI-09 conference in DC.)

I didn’t really make a big push at community-building until 2005 when I started working with Bruce Klein. Bruce was a hard-core futurist whose main focus in life was human immortality. I met him when he came to visit me in Maryland to film me for a documentary. We talked a bit after that, and I convinced him that one very good way to approach immortality would be to build AGI systems that would solve the biology problems related to life extension. I asked Bruce to help me raise money for AGI R&D. After banging his head on the problem of recruiting $$ from investors for a while, he decided it would be useful to first raise the profile of the AGI pursuit in general — and this would create a context in which raising $$ for our own AGI R&D would be easier.

So Bruce and I conceived the idea of organizing an AGI conference. We put together the first AGI Workshop in Bethesda in 2006. Bruce did the logistical work; I recruited the researchers from my own social network, which was fairly small at that point. I would not have thought of trying to run conferences and build a community without Bruce’s nudging — this was more a Bruce approach than a Ben approach. I note that a few years later, Bruce played the key role in getting Singularity University off the ground. Diamandis and Kurzweil were of course the big names who made it happen; but without Bruce’s organizational legwork (as well as that of his wife at the time, Susan Fonseca) over the six-month period prior to the first SU visioning meeting, SU would not have come together.

The AGI Workshop went well — and that was when I fully realized that there were a lot of AI researchers out there who were secretly harboring AGI interests and ambitions and even research projects, but were not discussing these openly because of the reputation risk.

From relationships strengthened at the initial AGI Workshop, the AGI conference series was born — the first full-on AGI conference was in 2008 at the University of Memphis, and they’ve been annual ever since. The conferences have both seeded a large number of collaborations and friendships among AGI researchers who otherwise would have continued operating in an isolated way, and have had an indirect impact via conferring more legitimacy on the AGI pursuit. They have brought together industry and academic and government researchers interested in AGI, and researchers from many different countries.

Leveraging the increasing legitimacy that the conferences brought, I then did various other community-building things like publishing a co-authored paper on AGI in “AI Magazine”, the mainstream periodical of the AI field. The co-authors of the paper included folks from major firms like IBM, and some prestigious “Good Old-Fashioned AI” people. A couple other AGI-like conferences have also emerged recently, e.g. BICA and Cognitive Systems. I helped get the BICA conferences going originally, though I didn’t play a leading role. I think the AGI conferences helped create an environment in which the emergence of these other related small conferences seemed natural and acceptable.

Of course, there is no way to assess how much impact all this community-building work of mine had, because we don’t know how the AI field would have developed without my efforts. But according to my best attempt at a rational estimation, it seems my initiatives of this sort have had serious impact.

A few general lessons I would draw from this experience are:

  1. You need to do the right thing at the right time. With AGI we started our “movement” at a time when a lot of researchers wanted to do and talk about AGI, but were ashamed to admit it to their peers. So there was an upsurge of AGI interest “waiting to happen”, in a sense.
  2. It’s only obvious in hindsight that it was the right time. In real time, moving forward, to start a community one needs to take lots of entrepreneurial risks, and be tolerant of getting called foolish multiple times, including by people you respect. The risks will include various aspects, such as huge amounts of time spent, carefully built reputation risked, and personal money ventured (for instance, even for something like a conference, the deposit for the venue and catering has to come from somewhere… For the first AGI workshop, we wanted to maximize attendance by the right people so we made it free, which meant that Bruce and I — largely Bruce, as he had more funds at that time — covered the expenses from our quite limited personal funds.)
  3. Social networking and community building are a lot more useful expenditures of time than I, as a math/science/philosophy geek, intuitively realized. Of course people who are more sociable and not so geeky by nature realize the utility of these pursuits innately. I had to learn via experience, and via Bruce Klein’s expert instruction.

Luke: Did the early AGI field have much continuity with the earlier discussions of “human-level AI” (HLAI)? E.g. there were articles by Nilsson, McCarthy, Solomonoff, Laird, and others, though I’m not sure whether there were any conferences or significant edited volumes on the subject.


Ben: It was important that, in trying to move AGI forward as a field and community, we did not ground our overall efforts in any of these earlier discussions.

Further, a key aspect of the AGI conferences was their utter neutrality with respect to which approach to take. This differentiates the AGI conferences from BICA or Cognitive Systems, for example. Even though I have my own opinions on which approaches are most likely to succeed, I wanted the conferences to be an intellectual free-for-all, equally open to all approaches with a goal of advanced AGI…

However, specific researchers involved with the AGI movement from an early stage were certainly heavily inspired by these older discussions you mention. E.g. Marcus Hutter had a paper in the initial AGI book and has been a major force at the conferences, and has been strongly Solomonoff-inspired. Paul Rosenbloom has been a major presence at the conferences; he comes from a SOAR background and worked with the good old founders of the traditional US AI field… Selmer Bringsjord’s logic-based approach to AGI certainly harks back to McCarthy. Etc.

So, to overgeneralize a bit, I would say that these previous discussions tended to bind the AGI problem with some particular approach to AGI, whereas my preference was to more cleanly separate the goal from the approach, and create a community neutral with regard to the approach…


Luke: The Journal of Artificial General Intelligence seems to have been pretty quiet for most of its history, but the conference series seems to have been quite a success. Can you talk a bit about the challenges and apparent impacts of these two projects, and how they compare to each other?


Ben: Honestly, I have had relatively little to do with the JAGI, on a month-by-month basis. Loosely speaking — the conferences have been my baby; and the journal has been the baby of my friend and colleague Dr. Pei Wang. I’m on the editorial board of the journal, but my involvement so far has been restricted to helping with high-level strategic decisions (like the move of the journal to the Versita platform a while ago, which Pei suggested and I was in favor of).

Since I have limited time to focus on stuff besides my own R&D work, I have personally decided to focus my attention on the conferences and not the journal. This is because I felt that the conferences would have a lot of power for informal connection building and community building, beyond the formal aspect of providing a venue for presenting papers and getting publications in conference proceedings volumes.

One thing I can say is that Pei made the explicit decision, early on, to focus on quality rather than quantity in getting papers in the journal. I think he’s succeeded at getting high quality papers.

I think the JAGI is an important initiative and has real potential to grow in the future and become an important journal. One big step we’ll need to take is to get it indexed in SCI, which is important because many academics only get “university brownie points” for publications in SCI indexed journals.


Luke: Can you say more about what kinds of special efforts you put into getting the AGI conference off the ground and growing it? Basically, what advice would you give to someone else who wants to do the same thing with another new technical discipline?


Ben: In the early stages, I made an effort to reach out one-on-one to researchers who I felt would be sympathetic to the AGI theme, and explicitly ask them to submit papers and come to the conference… This included some researchers whom I didn’t know personally at that time, but knew only via their work.

More recently, the conference keynote speeches have been useful as a tool for bringing new people into the AGI community. Folks doing relevant work who may not consider themselves AGI researchers per se, and hence wouldn’t submit papers to the conference, may still accept invitations to give keynote speeches. In some cases this may get them interested in the AGI field and community in a lasting way.

We’ve also made efforts not to let AGI get too narrowly sucked into the computer science field — by doing special sessions on neuroscience, robotics, futurology and so forth, and explicitly inviting folks from those fields to the conference, who wouldn’t otherwise think to attend.

Another thing we do is maintain our own mailing list of AGI-interested people, built by a variety of methods, including scouring conference websites to find folks who have presented papers related in some way to AGI. And we’ve established and maintained a relationship with AAAI, which enables us to advertise in their magazine and send postcards to their membership, giving us a broader reach.

Anyway this is just basic organizational mechanics I suppose — not terribly specific to AGI. This kind of stuff is fairly natural for me, due to having watched my mom organize various things for decades (she’s been a leader in the social work field and is retiring this month). But I don’t think it’s anything terribly special — only the subject matter (AGI) is special!

If I have put my personal stamp on this community-building process in some way, it’s probably been via the especially inclusive way it’s been conducted. I’ve had the attitude that since AGI is an early-stage field (though accelerating progress means that fields can potentially advance fairly rapidly from early to advanced stages), we should be open to pretty much any sensible perspective, in a spirit of community-wide brainstorming. Of course each of us must decide which ideas to accept and take seriously for our own work, and each researcher can have more in-depth discussions with those who share more of their own approach — but a big role of a broad community like the one we’re fostering with the AGI conferences is to expose people to ideas and perspectives different from the ones they’d encounter in their ordinary work lives, yet still with conceptual (and sometimes even practical) relevance…


Luke: What advice would you specifically give to those trying to create a field of “Friendly AI research”? For example, the term itself stands out as suboptimal, though I have even stronger objections to some of the most obvious alternatives, e.g. “Safe AI” or “Good AI.”


Ben: Well, I agree with you that the term “Friendly AI” is unlikely to catch on among researchers in academia or industry, or the media for that matter. So that is one issue you face in forming an FAI community. I don’t have a great alternative term in mind, but I’ll think about it. I’ve often gravitated toward the word “Beneficial” in this context, but I realize that’s not short or spiffy-sounding.

Taking the analogy with the AGI field, one question I have is whether there’s a population of researchers who are already working on Friendly AI but not calling their work by that label or discussing it widely; or researchers or students who have a craving to work on Friendly AI but feel inhibited from doing so because of social stigma against the topic. If so, the situation is analogous to that of the AGI field 10 years ago; if not, there’s no close analogy. Without such a “subterranean proto-community” already existent, guiding the formation of an above-the-ground community is a harder problem, I would think.

Of course, some sort of dramatic success in FAI research would attract people to the field. But this is a chicken-and-egg problem, as dramatic success is more likely to come if there are more people in the field. In AGI there has not yet been a dramatic success but we’ve been steadily building a community of researchers anyway. (There have been diverse, modest successes, at any rate…!)

I’m afraid I don’t have any great advice to offer beyond the obvious stuff. For instance, if you can get some famous researchers to put their reputation behind the idea that FAI research is an important thing to be pursuing now, that would be a big help… Or convince someone to make a Hollywood movie in which some folks are making an Evil AI, which is then thwarted by a Friendly AI whose design is expertly guided by a team of FAI theorists furtively writing equations on napkins ;D … Or get someone to write a book analogous to The Singularity is Near but FAI focused — i.e. with a theme “The Singularity is Quite Possibly Near — and Whether It’s a Positive or Negative Event for Humanity Likely Depends on How Well We Know What We’re Doing As It Approaches … and Understanding FAI Better is One Important Aspect of Knowing What We’re Doing…” … I’m fairly sure Eliezer Yudkowsky could write a great book on this theme if he wanted to, for example.

One key if FAI is to become a serious field, I think, will be to carefully and thoroughly build links between FAI researchers and people working in other related fields, like AGI, neuroscience, cognitive psychology, computer security, and so forth. If FAI is perceived as predominantly the domain of academic philosophers and abstract mathematicians, it’s not going to catch on — because after all, when is the last time that academic philosophers averted a major catastrophe, or created something of huge practical benefit? It will be key to more thoroughly link FAI to real stuff — to people actually doing things in the world and discovering new inventions or practical facts, rather than just writing philosophy papers or proving theorems about infeasible theoretical AI systems. Along these lines, workshops bringing together FHI and MIRI people don’t do much to build toward a real FAI community, I’d suppose.

Analogizing to my experience with AGI community-building, I’d say that organizing an FAI-oriented conference (with a name not involving “Friendly AI”), bringing together people from diverse disciplines with a broad variety of perspectives to discuss related issues freely, and without any implicit assumption built into the event that the MIRI/FHI perspective is the most likely path to a solution, would be a reasonable start.

One minor comment is that, since MIRI is closely associated in the futurist community with a very particular and somewhat narrow set of perspectives on Friendly AI, if there is to be an effort to build a broader research community focused on FAI, it might be better if MIRI did this in conjunction with some other organization or organizations having reputations for greater inclusiveness.

A broader comment is: I wonder if MIRI is framing the problem too narrowly. In your KurzweilAI review of James Barrat’s recent book, you define Friendly AI research as the problem “Figure out how to make sure the first self-improving intelligent machines will be human-friendly and will stay that way.”

But there are an awful lot of assumptions built into that formulation. It presents a strong bias toward certain directions of research, which may or may not be the best ones. For instance, Francis Heylighen, David Weinbaum and their colleagues at the Global Brain Institute have interesting (and potentially valuable) things to say about AI and human extinction risk, yet would not be comfortable shoehorning their thinking into a formulation like the above.

So I think you should find a good way to formulate the core concern at the base of FAI research in a broader way that will attract researchers with a greater variety of intellectual backgrounds and interests and theoretical orientations. The real issue you’re concerned with, according to my understanding, is something like: to increase the odds that, as AI advances beyond the human level and allied technologies advance as well, the world continues to be a place that is reasonably acceptable (and perhaps even awesome!) according to human values. This may sound the same to you as “Figure out how to make sure the first self-improving intelligent machines will be human-friendly and will stay that way,” but it won’t sound the same to everybody…

IMO an emerging FAI community, to be effective, will have to be open to a variety of different conceptual approaches to “increasing the odds that, as AI advances beyond the human level and allied technologies advance as well, the world continues to be a place that is reasonably acceptable (and perhaps even awesome!) according to human values.” — including approaches that have nothing directly to do with self-improving machines. Ironically, I suspect that this would lead to an influx of creative thinking into the subcommunity of researchers specifically concerned with “figuring out how to make sure the first self-improving intelligent machines will be human-friendly and will stay that way.”


Luke: Thanks, Ben!