MIRI’s recent effective altruism talks

News

MIRI recently participated in the 2014 Effective Altruism Retreat and Effective Altruism Summit organized by Leverage Research. We gave four talks, participated in a panel, and held “office hours” during which people could stop by and ask us questions.

The slides for our talks are available below:

If videos of these talks become available, we’ll link them from here as well.

See also our earlier posts Friendly AI Research as Effective Altruism and Why MIRI?

Groundwork for AGI safety engineering

Analysis

Improvements in AI are resulting in the automation of increasingly complex and creative human behaviors. Given enough time, we should expect artificial reasoners to begin to rival humans in arbitrary domains, culminating in artificial general intelligence (AGI).

A machine would qualify as an ‘AGI’, in the intended sense, if it could adapt to a very wide range of situations to consistently achieve some goal or goals. Such a machine would behave intelligently when supplied with arbitrary physical and computational environments, in the same sense that Deep Blue behaves intelligently when supplied with arbitrary chess board configurations — consistently hitting its victory condition within that narrower domain.
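To make that distinction concrete, here is a minimal sketch (illustrative only; the `Environment` protocol and `achieves_goal` check are hypothetical names, not an established formalism). Deep Blue implements something like `achieves_goal` for one fixed environment type; an AGI would pass the same check across an extremely wide class of `Environment` implementations:

```python
# Toy model: "behaving intelligently in a domain" = consistently reaching a
# victory condition across that domain's configurations. Generality is then
# a matter of how wide a class of environments a single agent can handle.
from typing import Callable, Protocol, TypeVar

Obs = TypeVar("Obs")  # what the agent perceives
Act = TypeVar("Act")  # what the agent can do

class Environment(Protocol[Obs, Act]):
    def observe(self) -> Obs: ...
    def step(self, action: Act) -> None: ...
    def succeeded(self) -> bool: ...

Agent = Callable[[Obs], Act]  # a policy mapping observations to actions

def achieves_goal(agent: Agent, env: Environment, horizon: int = 1000) -> bool:
    """Run the agent in the environment and report whether it hit its goal."""
    for _ in range(horizon):
        if env.succeeded():
            return True
        env.step(agent(env.observe()))
    return env.succeeded()
```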

Since generally intelligent software could help automate the process of thinking up and testing hypotheses in the sciences, AGI would be uniquely valuable for speeding technological growth. However, this wide-ranging productivity also makes AGI a unique challenge from a safety perspective. Knowing very little about the architecture of future AGIs, we can nonetheless make a few safety-relevant generalizations:

  • Because AGIs are intelligent, they will tend to be complex, adaptive, and capable of autonomous action, and they will have a large impact where employed.
  • Because AGIs are general, their users will have incentives to employ them in an increasingly wide range of environments. This makes it hard to construct valid sandbox tests and requirements specifications.
  • Because AGIs are artificial, they will deviate from human agents in ways that violate many of our natural intuitions and expectations about intelligent behavior.

Today’s AI software is already tough to verify and validate, thanks to its complexity and its uncertain behavior in the face of state space explosions. Menzies & Pecheur (2005) give a good overview of AI verification and validation (V&V) methods, noting that AI, and especially adaptive AI, will often yield undesired and unexpected behaviors.
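To see how fast state space explosion bites, consider a back-of-the-envelope sketch (the component counts below are hypothetical, chosen only to show the multiplicative growth). Exhaustive enumeration is of course a strawman for serious V&V, but it illustrates why correctness can never come from testing everything:

```python
# State space explosion, illustrated: joint states multiply across components,
# so even a modest system outruns any conceivable exhaustive test budget.

SECONDS_PER_YEAR = 365 * 24 * 3600

def joint_states(booleans: int, modes: int, counters: int, counter_range: int) -> int:
    """Each boolean flag doubles the joint state space; each mode setting and
    each bounded counter multiplies it further."""
    return (2 ** booleans) * modes * (counter_range ** counters)

states = joint_states(booleans=60, modes=8, counters=4, counter_range=256)
tests_per_second = 10 ** 6  # optimistic: a million distinct states checked per second
years = states / tests_per_second / SECONDS_PER_YEAR
print(f"{states:.2e} joint states; {years:.2e} years to enumerate them all")
# ~3.96e+28 states and ~1.26e+15 years -- before even considering input sequences.
```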

An adaptive AI that acts autonomously, like a Mars rover that can’t be directly piloted from Earth, represents an additional large increase in difficulty. Autonomous safety-critical AI agents need to make irreversible decisions in dynamic environments while maintaining very low failure rates. The state of the art in safety research for autonomous systems is improving, but it continues to lag behind work on system capabilities. Hinchman et al. (2012) write:

As autonomous systems become more complex, the notion that systems can be fully tested and all problems will be found is becoming an impossible task. This is especially true in unmanned/autonomous systems. Full test is becoming increasingly challenging on complex system[s]. As these systems react to more environmental [stimuli] and have larger decision spaces, testing all possible states and all ranges of the inputs to the system is becoming impossible. […] As systems become more complex, safety is really risk hazard analysis, i.e. given x amount of testing, the system appears to be safe. A fundamental change is needed. This change was highlighted in the 2010 Air Force Technology Horizon report, “It is possible to develop systems having high levels of autonomy, but it is the lack of suitable V&V methods that prevents all but relatively low levels of autonomy from being certified for use.” […]

The move towards more autonomous systems has lifted this need [for advanced verification and validation techniques and methodologies] to a national level.
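The quoted point that “given x amount of testing, the system appears to be safe” can be made quantitative. By a standard statistical bound (the “rule of three”; the numbers below are illustrative, not claims about any particular system), observing zero failures in n independent trials only supports a per-trial failure probability of roughly 3/n at 95% confidence:

```python
import math

def failure_free_trials_needed(max_failure_prob: float, confidence: float = 0.95) -> float:
    """Zero-failure trials needed to bound the per-trial failure probability
    below max_failure_prob at the given confidence, assuming independent
    trials: solve (1 - p)^n <= 1 - confidence for n."""
    return math.log(1.0 - confidence) / math.log1p(-max_failure_prob)

for p in (1e-4, 1e-6, 1e-9):
    print(f"p < {p:g}: about {failure_free_trials_needed(p):.1e} failure-free trials")
# p < 1e-09 demands ~3e9 failure-free trials -- and only over the tested
# distribution of situations; deploying the agent somewhere new resets the clock.
```

Certifying the very low failure rates demanded of irreversible, safety-critical decisions therefore requires V&V methods beyond testing, which is the report’s point.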

AI acting autonomously in arbitrary domains, then, looks particularly difficult to verify. If AI methods continue to see rapid gains in efficiency and versatility, and especially if these gains further increase the opacity of AI algorithms to human inspection, AI safety engineering will become much more difficult in the future. In the absence of any reason to expect a development in the lead-up to AGI that would make high-assurance AGI easy (or AGI itself unlikely), we should be worried about the safety challenges of AGI, and that worry should inform our research priorities today.

Below, I’ll give reasons to doubt that AGI safety challenges are just an extension of narrow-AI safety challenges, and I’ll list some research avenues people at MIRI expect to be fruitful.

Read more »

MIRI’s August 2014 newsletter

Newsletters

Dear friends,

Our summer matching challenge is underway! Every donation made to MIRI between now and August 15th, 2014 will be matched dollar-for-dollar, up to a total of $200,000!

Please donate now to help support our research!

Research Updates

News Updates

  • There are now 10 MIRIx groups around the world, in 4 countries.
  • Nick Bostrom will speak about his new book Superintelligence at UC Berkeley on September 12th.
  • Jed McCaleb has launched a new digital currency, Stellar. MIRI now accepts donated stellars; our public name for receiving stellars is: miri

Other Updates

  • Nick Bostrom’s new book Superintelligence has been released in the UK, and the Kindle version is available in the US. (Hardcopy available in the US on Sep. 1st.)

As always, please don’t hesitate to let us know if you have any questions or comments.

Best,
Luke Muehlhauser
Executive Director


Scott Frickel on intellectual movements

Conversations

Scott Frickel is Associate Professor in the Department of Sociology and the Institute for the Study of Environment and Society at Brown University. His research interweaves sociological analysis with environmental studies and science and technology studies. Prior to coming to Brown he was Boeing Distinguished Professor of Environmental Sociology at Washington State University. He holds a Ph.D. from the University of Wisconsin–Madison.

His research has appeared in a wide range of disciplinary and interdisciplinary journals, including American Sociological Review; Annual Review of Sociology; Science, Technology and Human Values; and Environmental Science and Policy. He is author of Chemical Consequences: Environmental Mutagens, Scientist Activism, and the Rise of Genetic Toxicology and co-editor with Kelly Moore of The New Political Sociology of Science: Institutions, Networks, and Power.

Luke Muehlhauser: In Frickel & Gross (2005), you and your co-author present a “general theory” of scientific/intellectual movements (SIMs). I’ll summarize the theory briefly for our readers. In your terminology:

  • “SIMs have a more or less coherent program for scientific or intellectual change… toward whose knowledge core participants are consciously oriented…”
  • “The aforementioned core consists of intellectual practices that are contentious relative to normative expectations within a given… intellectual domain.”
  • “Precisely because the intellectual practices recommended by SIMs are contentious, SIMs are inherently political… because every program for intellectual change involves a desire to alter the configuration of social positions within or across intellectual fields in which power, attention, and other scarce resources are unequally distributed…”
  • “[SIMs] are constituted through organized collective action.”
  • “SIMs exist as historical entities for finite periods.”
  • “SIMs can vary in intellectual aim and scope. Some problematize previously… underdiscussed topics… Others… seek to introduce entirely new theoretical perspectives on established terrain… Some SIMs distinguish themselves through new methods… Other SIMs aim to alter the boundaries of existing… intellectual fields…”

Next, you put forward some propositions about SIMs, which seem promising given the case studies you’ve seen, but are not the result of a comprehensive analysis of SIMs — merely a starting point:

  1. “A SIM is more likely to emerge when high-status intellectual actors harbor complaints against what they understand to be the central intellectual tendencies of the day.”
  2. “SIMs are more likely to be successful when structural conditions provide access to key resources” (research funding, employment, access to rare equipment or data, intellectual prestige, etc.)
  3. “The greater a SIM’s access to [local sites at which SIM representatives can have sustained contact with potential recruits], the more likely it is to be successful.”
  4. “The success of a SIM is contingent upon the work done by movement participants to frame movement ideas in ways that resonate with the concerns of those who inhabit an intellectual field or fields.”

My first question is this: what are the most significant pieces of follow-up work on your general theory of SIMs so far?


Scott Frickel: The article on SIMs that Neil Gross and I published back in 2005 has been well-received, for the most part. Citation counts on Google Scholar have risen steadily since then and so I’m encouraged by the continued interest. It seems that the article’s central idea – that intellectual change is a broadly social phenomenon whose dynamics are in important ways similar to social movements – is resonating among sociologists and others.

The terrain that we mapped in developing our theory was intentionally quite broad, giving others lots of room to build on. And that seems to be what’s happening. Rather than challenge our basic argument or framework, scholars’ substantive engagements have tended to add elements to the theory or have sought to deepen theorization of certain existing elements.

So, for example, Jerry Jacobs (2013) extends the SIMs framework from specific disciplinary fields to the lines of connectivity between disciplines, in seeking to better understand widespread enthusiasms for interdisciplinarity. Mikaila Arthur (2009) wants to extend the framework to better theorize the role of exogenous social movements in fomenting change within the academy. Tom Waidzunas (2013) picks up on our idea of an “intellectual opportunity structure” and, like Arthur, extends the concept’s utility to the analysis of expert knowledge production beyond the academy. In his excellent new book, Why Are Professors Liberal and Why Do Conservatives Care? (Harvard, 2013), Neil Gross links our theory to the political leanings of the American professoriate. His idea is that SIMs can shape the political typing of entire fields – e.g. as more or less liberal or conservative. So, rather than arguing for an extended view of SIMs, Gross wants to recognize an extended view of the impacts of SIMs, which can affect academic fields singly or in combination with other competitor SIMs.

Another study that I like very much is John Parker and Ed Hackett’s (2012) analysis of how emotions shape intellectual processes in ways that drive the growth and development of SIMs. The emotional content of SIMs is something quite new for the theory, but it is consonant with lots of good work in social movement theory. Some of my own recent work builds from the SIMs project to offer a companion theory of ‘shadow mobilization’ to help explain expert interpenetration of social movements (Frickel et al. 2014). So, in different ways, the project is chugging forward.
Read more »

Nick Bostrom to speak about Superintelligence at UC Berkeley

News

MIRI has arranged for Nick Bostrom to discuss his new book — Superintelligence: Paths, Dangers, Strategies — on the UC Berkeley campus on September 12th.

Bostrom is the director of the Future of Humanity Institute at Oxford University, and is a frequent collaborator with MIRI researchers (e.g. see “The Ethics of Artificial Intelligence”). He is the author of some 200 publications, and is best known for his work in five areas: (1) existential risk; (2) the simulation argument; (3) anthropics; (4) the impacts of future technology; and (5) the implications of consequentialism for global strategy. Earlier this year he was included on Prospect magazine’s World Thinkers list as the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher.

Bostrom will be introduced by UC Berkeley professor Stuart Russell, co-author of the world’s leading AI textbook. Russell’s blurb for Superintelligence reads:

Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era.

The talk will begin at 7pm in room 310 (Banatao Auditorium) in Sutardja Dai Hall on the UC Berkeley campus.

If you live nearby, we hope to see you there! The room seats 150 people, on a first-come, first-served basis.

There will also be copies of Superintelligence available for purchase.


2014 Summer Matching Challenge!

News

Thanks to the generosity of several major donors, every donation made to MIRI between now and August 15th, 2014 will be matched dollar-for-dollar, up to a total of $200,000!

We have reached our matching total of $200,000! (116 total donors)

Now is your chance to double your impact while helping us raise up to $400,000 (with matching) to fund our research program.

Corporate matching and monthly giving pledges will count towards the total! Please email malo@intelligence.org if you intend to leverage corporate matching (check here to see if your employer will match your donation) or would like to pledge 6 months of monthly donations, so that we can properly account for your contributions towards the fundraiser.

(If you’re unfamiliar with our mission, see: Why MIRI?)


Accomplishments Since Our Winter 2013 Fundraiser Launched:

Ongoing Activities You Can Help Support

  • We’re writing an overview of the Friendly AI technical agenda (as we see it) so far.
  • We’re currently developing and testing several tutorials on different pieces of the Friendly AI technical agenda (tiling agents, modal agents, etc.).
  • We’re writing several more papers and reports.
  • We’re growing the MIRIx program, largely to grow the pool of people we can plausibly hire as full-time FAI researchers in the next couple years.
  • We’re planning, or helping to plan, multiple research workshops, including the May 2015 decision theory workshop at Cambridge University.
  • We’re finishing the editing for a book version of Eliezer’s Sequences.
  • We’re helping to fund further SPARC activity, which provides education and skill-building to elite young math talent, and introduces them to ideas like effective altruism and global catastrophic risks.
  • We’re continuing to discuss formal collaboration opportunities with UC Berkeley faculty and development staff.
  • We’re helping Nick Bostrom promote his Superintelligence book in the U.S.
  • We’re investigating opportunities for supporting Friendly AI research via federal funding sources such as the NSF.

Other projects are still being surveyed for likely cost and impact. See also our mid-2014 strategic plan.

We appreciate your support for our work! Donate now, and seize a better-than-usual opportunity to move our work forward. If you have questions about donating, please contact Malo Bourgon at malo@intelligence.org.

The $200,000 in total matching funds has been provided by Jaan Tallinn, Edwin Evans, and Rick Schwall.

May 2015 decision theory conference at Cambridge University

News

MIRI, CSER, and the philosophy department at the University of Cambridge are co-organizing a decision theory conference titled Self-Prediction in Decision Theory and AI, to be held in the university’s Faculty of Philosophy. The dates are May 13–19, 2015.

Huw Price and Arif Ahmed at Cambridge University are the lead organizers.

Confirmed speakers, in the order they are scheduled to speak, are:

(Updated May 17, 2015.)

MIRI’s July 2014 newsletter

Newsletters

Research Updates

News Updates

  • We’ve released our mid-2014 strategic plan update.
  • There are currently six active MIRIx groups around the world. If you’re a mathematician, computer scientist, or formal philosopher, you may want to attend one of these groups, or apply for funding to run your own independently organized MIRIx workshop!
  • Luke and Eliezer will be giving talks at the Effective Altruism Summit.
  • We are actively hiring for four positions: research fellow, science writer, office manager, and director of development. Salaries and benefits are competitive, and visa assistance is available if needed.

Other Updates

As always, please don’t hesitate to let us know if you have any questions or comments.

Best,
Luke Muehlhauser
Executive Director
