MIRI’s September Newsletter


 

 

Machine Intelligence Research Institute

Thanks to the generosity of 100+ donors, we successfully completed our 2014 summer matching challenge on August 15th, raising more than $400,000 total for our research program. Our deepest thanks to all our supporters!

News updates

  • MIRI is running an online reading group for Nick Bostrom’s Superintelligence. Join the discussion here!
  • MIRI participated in the 2014 Effective Altruism Summit. Slides from our talks are available here.

Other updates

As always, please don’t hesitate to let us know if you have any questions or comments.

Best,
Luke Muehlhauser
Executive Director

 

 

Superintelligence reading group


Nick Bostrom’s eagerly awaited Superintelligence comes out in the US this week. To help you get the most out of it, MIRI is running an online reading group where you can join with others to ask questions, discuss ideas, and probe the arguments more deeply.

The reading group will “meet” in a weekly post on the LessWrong discussion forum. For each “meeting,” we will read about half a chapter of Superintelligence, then come together virtually to discuss. I’ll summarize the chapter and offer a few relevant notes, thoughts, and ideas for further investigation. (My notes will also be used as the source material for the final reading guide for the book.)

Discussion will take place in the comments. I’ll offer some questions, and invite you to bring your own, as well as thoughts, criticisms and suggestions for interesting related material. Your contributions to the reading group might also (with permission) be used in our final reading guide for the book.

We welcome both newcomers and veterans on the topic. Content will aim to be intelligible to a wide audience, and topics will range from novice to expert level. All levels of time commitment are welcome. We especially encourage AI researchers and practitioners to participate. Just use a pseudonym if you don’t want your questions and comments publicly linked to your identity.

We will follow this preliminary reading guide, produced by MIRI, reading one section per week.

If you have already read the book, don’t worry! To the extent you remember what it says, your superior expertise will only be a bonus. To the extent you don’t remember what it says, now is a good time for a review! If you don’t have time to read the book, but still want to participate, you are also welcome to join in. I will provide summaries, and many things will have page numbers, in case you want to skip to the relevant parts.

If this sounds good to you, first grab a copy of Superintelligence. You may also want to sign up here to be emailed when the discussion begins each week. The first virtual meeting (forum post) will go live at 6pm Pacific on Monday, September 15th. Subsequent meetings will start at 6pm every Monday, so if you’d like to coordinate quick-fire discussion with others, put that in your calendar. If you prefer flexibility, come by any time! And remember that if there are any people you would especially enjoy discussing Superintelligence with, link them to this post!

Topics for the first week will include impressive displays of artificial intelligence, why computers play board games so well, and what a reasonable person should infer from the agricultural and industrial revolutions.

 

New paper: “Exploratory engineering in artificial intelligence”


Luke Muehlhauser and Bill Hibbard have a new paper (PDF) in the September 2014 issue of Communications of the ACM, the world’s most-read peer-reviewed computer science publication. The title is “Exploratory Engineering in Artificial Intelligence.”

Excerpt:

We regularly see examples of new artificial intelligence (AI) capabilities… No doubt such automation will produce tremendous economic value, but will we be able to trust these advanced autonomous systems with so much capability?

Today, AI safety engineering mostly consists in a combination of formal methods and testing. Though powerful, these methods lack foresight: they can be applied only to particular extant systems. We describe a third, complementary approach that aims to predict the (potentially hazardous) properties and behaviors of broad classes of future AI agents, based on their mathematical structure (for example, reinforcement learning)… We call this approach “exploratory engineering in AI.”

In this Viewpoint, we focus on theoretical AI models inspired by Marcus Hutter’s AIXI, an optimal agent model for maximizing an environmental reward signal…

Autonomous intelligent machines have the potential for large impacts on our civilization. Exploratory engineering gives us the capacity to have some foresight into what these impacts might be, by analyzing the properties of agent designs based on their mathematical form. Exploratory engineering also enables us to identify lines of research — such as the study of Dewey’s value-learning agents — that may be important for anticipating and avoiding unwanted AI behaviors. This kind of foresight will be increasingly valuable as machine intelligence comes to play an ever-larger role in our world.
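To make the kind of agent model mentioned in the excerpt a bit more concrete, here is a minimal sketch of a reward-maximizing agent loop in Python. It is not the AIXI formalism the paper analyzes, just an epsilon-greedy toy that learns which of two actions produces more reward; every name in it (`TwoArmedEnvironment`, `run_agent`, the payoff probabilities) is invented purely for illustration.

```python
# Illustrative toy only -- not the AIXI formalism from the paper. A tiny
# epsilon-greedy agent that learns which action yields more reward from an
# environmental reward signal.
import random
from collections import defaultdict

class TwoArmedEnvironment:
    """Hypothetical environment: action 'b' pays off more often than 'a'."""
    def reset(self):
        return "start"
    def step(self, action):
        payoff_prob = 0.7 if action == "b" else 0.3
        reward = 1.0 if random.random() < payoff_prob else 0.0
        return "start", reward          # observation stays constant in this toy

def run_agent(env, actions, steps=5000, epsilon=0.1, lr=0.05):
    value = defaultdict(float)          # estimated value of each (obs, action)
    obs = env.reset()
    for _ in range(steps):
        if random.random() < epsilon:   # explore occasionally
            action = random.choice(actions)
        else:                           # otherwise pick the best-looking action
            action = max(actions, key=lambda a: value[(obs, a)])
        next_obs, reward = env.step(action)
        # nudge the estimate toward the observed reward (bandit-style update)
        value[(obs, action)] += lr * (reward - value[(obs, action)])
        obs = next_obs
    return dict(value)

if __name__ == "__main__":
    print(run_agent(TwoArmedEnvironment(), actions=["a", "b"]))
```

The point of such sketches, in the exploratory-engineering spirit, is that the agent’s general behavior (here, converging on whichever action the reward signal favors) can be reasoned about from its mathematical form before any particular system is built.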

2014 Summer Matching Challenge Completed!


Thanks to the generosity of 100+ donors, today we successfully completed our 2014 summer matching challenge, raising more than $400,000 total for our research program.

Our deepest thanks to all our supporters!

Also, Jed McCaleb’s new crypto-currency Stellar was launched during MIRI’s fundraiser, and we decided to accept donated stellars. These donations weren’t counted toward the matching drive, and their market value is unstable at this early stage, but as of today we’ve received 850,000+ donated stellars from 3000+ different stellar accounts. Our thanks to everyone who donated in stellar!

MIRI’s recent effective altruism talks


MIRI recently participated in the 2014 Effective Altruism Retreat and Effective Altruism Summit organized by Leverage Research. We gave four talks, participated in a panel, and held “office hours” during which people could stop by and ask us questions.

The slides for our talks are available below:

If videos of these talks become available, we’ll link them from here as well.

See also our earlier posts Friendly AI Research as Effective Altruism and Why MIRI?

Groundwork for AGI safety engineering


Improvements in AI are resulting in the automation of increasingly complex and creative human behaviors. Given enough time, we should expect artificial reasoners to begin to rival humans in arbitrary domains, culminating in artificial general intelligence (AGI).

A machine would qualify as an ‘AGI’, in the intended sense, if it could adapt to a very wide range of situations to consistently achieve some goal or goals. Such a machine would behave intelligently when supplied with arbitrary physical and computational environments, in the same sense that Deep Blue behaves intelligently when supplied with arbitrary chess board configurations — consistently hitting its victory condition within that narrower domain.

Since generally intelligent software could help automate the process of thinking up and testing hypotheses in the sciences, AGI would be uniquely valuable for speeding technological growth. However, this wide-ranging productivity also makes AGI a unique challenge from a safety perspective. Knowing very little about the architecture of future AGIs, we can nonetheless make a few safety-relevant generalizations:

  • Because AGIs are intelligent, they will tend to be complex, adaptive, and capable of autonomous action, and they will have a large impact where employed.
  • Because AGIs are general, their users will have incentives to employ them in an increasingly wide range of environments. This makes it hard to construct valid sandbox tests and requirements specifications.
  • Because AGIs are artificial, they will differ from human agents in many respects, and will therefore violate many of our natural intuitions and expectations about intelligent behavior.

Today’s AI software is already tough to verify and validate, thanks to its complexity and its uncertain behavior in the face of state space explosions. Menzies & Pecheur (2005) give a good overview of AI verification and validation (V&V) methods, noting that AI, and especially adaptive AI, will often yield undesired and unexpected behaviors.

An adaptive AI that acts autonomously, like a Mars rover that can’t be directly piloted from Earth, represents an additional large increase in difficulty. Autonomous safety-critical AI agents need to make irreversible decisions in dynamic environments with very low failure rates. The state of the art in safety research for autonomous systems is improving, but continues to lag behind system capabilities work. Hinchman et al. (2012) write:

As autonomous systems become more complex, the notion that systems can be fully tested and all problems will be found is becoming an impossible task. This is especially true in unmanned/autonomous systems. Full test is becoming increasingly challenging on complex system. As these systems react to more environmental [stimuli] and have larger decision spaces, testing all possible states and all ranges of the inputs to the system is becoming impossible. […] As systems become more complex, safety is really risk hazard analysis, i.e. given x amount of testing, the system appears to be safe. A fundamental change is needed. This change was highlighted in the 2010 Air Force Technology Horizon report, “It is possible to develop systems having high levels of autonomy, but it is the lack of suitable V&V methods that prevents all but relatively low levels of autonomy from being certified for use.” […]

The move towards more autonomous systems has lifted this need [for advanced verification and validation techniques and methodologies] to a national level.

AI acting autonomously in arbitrary domains, then, looks particularly difficult to verify. If AI methods continue to see rapid gains in efficiency and versatility, and especially if these gains further increase the opacity of AI algorithms to human inspection, AI safety engineering will become much more difficult in the future. In the absence of any reason to expect a development in the lead-up to AGI that would make high-assurance AGI easy (or AGI itself unlikely), we should be worried about the safety challenges of AGI, and that worry should inform our research priorities today.
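As a back-of-the-envelope illustration of why exhaustive testing breaks down (the “state space explosion” mentioned above), the toy calculation below counts the input combinations a full test sweep would have to cover. The input counts and value ranges are made-up numbers, chosen only to show the exponential growth.

```python
# Illustrative arithmetic only: with v possible values per input and n inputs,
# exhaustively testing every combination requires v**n test cases.
def exhaustive_test_cases(values_per_input: int, num_inputs: int) -> int:
    return values_per_input ** num_inputs

for n in (5, 10, 20, 40):
    print(f"{n} inputs: {exhaustive_test_cases(10, n):.1e} test cases")
# 5 inputs already need 1.0e+05 cases; 40 inputs need 1.0e+40 -- far beyond
# any feasible test budget, which is why testing alone cannot certify safety.
```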

Below, I’ll give reasons to doubt that AGI safety challenges are just an extension of narrow-AI safety challenges, and I’ll list some research avenues people at MIRI expect to be fruitful.

Read more »

MIRI’s August 2014 newsletter


Machine Intelligence Research Institute

Dear friends,

Our summer matching challenge is underway! Every donation made to MIRI between now and August 15th, 2014 will be matched dollar-for-dollar, up to a total of $200,000!

Please donate now to help support our research!

Research Updates

News Updates

  • There are now 10 MIRIx groups around the world, in 4 countries.
  • Nick Bostrom will speak about his new book Superintelligence at UC Berkeley on September 12th.
  • Jed McCaleb has launched a new digital currency, stellars. MIRI now accepts donated stellars; our public name for receiving stellars is: miri

Other Updates

  • Nick Bostrom’s new book Superintelligence has been released in the UK, and the Kindle version is available in the US. (Hardcopy available in the US on Sep. 1st.)

As always, please don’t hesitate to let us know if you have any questions or comments.

Best,
Luke Muehlhauser
Executive Director

 

Scott Frickel on intellectual movements


Scott Frickel portrait Scott Frickel is Associate Professor in the Department of Sociology and Institute for the Study of Environment and Society at Brown University. His research interweaves sociological analysis with environmental studies and science and technology studies. Prior to coming to Brown he was Boeing Distinguished Professor of Environmental Sociology at Washington State University. He holds a Ph.D. from the University of Wisconsin – Madison.

His research has appeared in a wide range of disciplinary and interdisciplinary journals, including American Sociological Review; Annual Review of Sociology; Science, Technology and Human Values; and Environmental Science and Policy. He is author of Chemical Consequences: Environmental Mutagens, Scientist Activism, and the Rise of Genetic Toxicology and co-editor with Kelly Moore of The New Political Sociology of Science: Institutions, Networks, and Power.

Luke Muehlhauser: In Frickel & Gross (2005), you and your co-author present a “general theory” of scientific/intellectual movements (SIMs). I’ll summarize the theory briefly for our readers. In your terminology:

  • “SIMs have a more or less coherent program for scientific or intellectual change… toward whose knowledge core participants are consciously oriented…”
  • “The aforementioned core consists of intellectual practices that are contentious relative to normative expectations within a given… intellectual domain.”
  • “Precisely because the intellectual practices recommended by SIMs are contentious, SIMs are inherently political… because every program for intellectual change involves a desire to alter the configuration of social positions within or across intellectual fields in which power, attention, and other scarce resources are unequally distributed…”
  • “[SIMs] are constituted through organized collective action.”
  • “SIMs exist as historical entities for finite periods.”
  • “SIMs can vary in intellectual aim and scope. Some problematize previously… underdiscussed topics… Others… seek to introduce entirely new theoretical perspectives on established terrain… Some SIMs distinguish themselves through new methods… Other SIMs aim to alter the boundaries of existing… intellectual fields…”

Next, you put forward some propositions about SIMs, which seem promising given the case studies you’ve seen, but are not the result of a comprehensive analysis of SIMs — merely a starting point:

  1. “A SIM is more likely to emerge when high-status intellectual actors harbor complaints against what they understand to be the central intellectual tendencies of the day.”
  2. “SIMs are more likely to be successful when structural conditions provide access to key resources” (research funding, employment, access to rare equipment or data, intellectual prestige, etc.)
  3. “The greater a SIM’s access to [local sites at which SIM representatives can have sustained contact with potential recruits], the more likely it is to be successful.”
  4. “The success of a SIM is contingent upon the work done by movement participants to frame movement ideas in ways that resonate with the concerns of those who inhabit an intellectual field or fields.”

My first question is this: what are the most significant pieces of follow-up work on your general theory of SIMs so far?


Scott Frickel: The article on SIMs that Neil Gross and I published back in 2005 has been well-received, for the most part. Citation counts on Google Scholar have risen steadily since then and so I’m encouraged by the continued interest. It seems that the article’s central idea – that intellectual change is a broadly social phenomenon whose dynamics are in important ways similar to social movements – is resonating among sociologists and others.

The terrain that we mapped in developing our theory was intentionally quite broad, giving others lots of room to build on. And that seems to be what’s happening. Rather than challenge our basic argument or framework, scholars’ substantive engagements have tended to add elements to the theory or have sought to deepen theorization of certain existing elements.

So for example, Jerry Jacobs (2013) extends the SIMs framework from specific disciplinary fields to the lines of connectivity between disciplines in seeking to better understand widespread enthusiasms for interdisciplinarity. Mikaila Arthur (2009) wants to extend the framework to better theorize the role of exogenous social movements in fomenting change within the academy. Tom Waidzunas (2013) picks up on our idea of an “intellectual opportunity structure” and, like Arthur, extends the concept’s utility to the analysis of expert knowledge production beyond the academy. In his excellent new book, Why Are Professors Liberal and Why Do Conservatives Care? (Harvard, 2013), Neil Gross links our theory to the political leanings of the American professoriate. His idea is that SIMs can shape the political typing of entire fields – e.g. as more or less liberal or conservative. So, rather than arguing for an extended view of SIMs, Gross wants to recognize an extended view of the impacts of SIMs, which can affect academic fields singly or in combination with other competitor SIMs. Another study that I like very much is John Parker and Ed Hackett’s (2012) analysis of how emotions shape intellectual processes in ways that drive the growth and development of SIMs. The emotional content of SIMs is something quite new for the theory, but it is consonant with lots of good work in social movement theory. Some of my own recent work builds from the SIMs project to offer a companion theory of ‘shadow mobilization’ to help explain expert interpenetration of social movements (Frickel et al. 2014). So, in different ways, the project is chugging forward.
Read more »