Scott Frickel is Associate Professor in the Department of Sociology and Institute for the Study of Environment and Society at Brown University. His research interweaves sociological analysis with environmental studies and science and technology studies. Prior to coming to Brown he was Boeing Distinguished Professor of Environmental Sociology at Washington State University. He holds a Ph.D. from the University of Wisconsin – Madison.
His research has appeared in a wide range of disciplinary and interdisciplinary journals, including American Sociological Review; Annual Review of Sociology; Science, Technology and Human Values; and Environmental Science and Policy. He is author of Chemical Consequences: Environmental Mutagens, Scientist Activism, and the Rise of Genetic Toxicology and co-editor with Kelly Moore of The New Political Sociology of Science: Institutions, Networks, and Power.
Luke Muehlhauser: In Frickel & Gross (2005), you and your co-author present a “general theory” of scientific/intellectual movements (SIMs). I’ll summarize the theory briefly for our readers. In your terminology:
- “SIMs have a more or less coherent program for scientific or intellectual change… toward whose knowledge core participants are consciously oriented…”
- “The aforementioned core consists of intellectual practices that are contentious relative to normative expectations within a given… intellectual domain.”
- “Precisely because the intellectual practices recommended by SIMs are contentious, SIMs are inherently political… because every program for intellectual change involves a desire to alter the configuration of social positions within or across intellectual fields in which power, attention, and other scarce resources are unequally distributed…”
- “[SIMs] are constituted through organized collective action.”
- “SIMs exist as historical entities for finite periods.”
- “SIMs can vary in intellectual aim and scope. Some problematize previously… underdiscussed topics… Others… seek to introduce entirely new theoretical perspectives on established terrain… Some SIMs distinguish themselves through new methods… Other SIMs aim to alter the boundaries of existing… intellectual fields…”
Next, you put forward some propositions about SIMs, which seem promising given the case studies you’ve seen, but are not the result of a comprehensive analysis of SIMs — merely a starting point:
- “A SIM is more likely to emerge when high-status intellectual actors harbor complaints against what they understand to be the central intellectual tendencies of the day.”
- “SIMs are more likely to be successful when structural conditions provide access to key resources” (research funding, employment, access to rare equipment or data, intellectual prestige, etc.)
- “The greater a SIM’s access to [local sites at which SIM representatives can have sustained contact with potential recruits], the more likely it is to be successful.”
- “The success of a SIM is contingent upon the work done by movement participants to frame movement ideas in ways that resonate with the concerns of those who inhabit an intellectual field or fields.”
My first question is this: what are the most significant pieces of follow-up work on your general theory of SIMs so far?
Scott Frickel: The article on SIMs that Neil Gross and I published back in 2005 has been well-received, for the most part. Citation counts on Google Scholar have risen steadily since then and so I’m encouraged by the continued interest. It seems that the article’s central idea – that intellectual change is a broadly social phenomenon whose dynamics are in important ways similar to social movements – is resonating among sociologists and others.
The terrain that we mapped in developing our theory was intentionally quite broad, giving others lots of room to build on. And that seems to be what’s happening. Rather than challenge our basic argument or framework, scholars’ substantive engagements have tended to add elements to the theory or have sought to deepen theorization of certain existing elements. So for example, Jerry Jacobs (2013) extends the SIMs framework from specific disciplinary fields to the lines of connectivity between disciplines in seeking to better understand widespread enthusiasms for interdisciplinarity. Mikaila Arthur (2009) wants to extend the framework to better theorize the role of exogenous social movements in fomenting change within the academy. Tom Waidzunas (2013) picks up on our idea of an “intellectual opportunity structure” and, like Arthur, extends the concept’s utility to the analysis of expert knowledge production beyond the academy. In his excellent new book, Why Are Professors Liberal and Why Do Conservatives Care? (Harvard, 2013), Neil Gross links our theory to the political leanings of the American professoriate. His idea is that SIMs can shape the political typing of entire fields – e.g. as more or less liberal or conservative. So, rather than arguing for an extended view of SIMs, Gross wants to recognize an extended view of the impacts of SIMs, which can affect academic fields singly or in combination with other competitor SIMs. Another study that I like very much is John Parker and Ed Hackett’s (2012) analysis of how emotions shape intellectual processes in ways that drive the growth and development of SIMs. The emotional content of SIMs is something quite new for the theory, but it is consonant with lots of good work in social movement theory. Some of my own recent work builds from the SIMs project to offer a companion theory of ‘shadow mobilization’ to help explain expert interpenetration of social movements (Frickel et al. 2014). So, in different ways, the project is chugging forward.
Bostrom is the director of the Future of Humanity Institute at Oxford University, and is a frequent collaborator with MIRI researchers (e.g. see “The Ethics of Artificial Intelligence”). He is the author of some 200 publications, and is best known for his work in five areas: (1) existential risk; (2) the simulation argument; (3) anthropics; (4) the impacts of future technology; and (5) the implications of consequentialism for global strategy. Earlier this year he was included on Prospect magazine’s World Thinkers list, the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher.
Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era.
The talk will begin at 7pm at room 310 (Banatao Auditorium) in Sutardja Dai Hall (map) on the UC Berkeley campus.
If you live nearby, we hope to see you there! The room seats 150 people, on a first-come basis.
There will also be copies of Superintelligence available for purchase.
Thanks to the generosity of several major donors,† every donation made to MIRI between now and August 15th, 2014 will be matched dollar-for-dollar, up to a total of $200,000!
Update: We have reached our matching total of $200,000!
Now is your chance to double your impact while helping us raise up to $400,000 (with matching) to fund our research program.
Corporate matching and monthly giving pledges will count towards the total! Please email firstname.lastname@example.org if you intend to leverage corporate matching (check here to see if your employer will match your donation) or would like to pledge 6 months of monthly donations, so that we can properly account for your contributions towards the fundraiser.
(If you’re unfamiliar with our mission, see: Why MIRI?)
Accomplishments Since Our Winter 2013 Fundraiser Launched:
- Hired 2 new Friendly AI researchers, Benja Fallenstein & Nate Soares. Since March, they’ve authored or co-authored 4 papers/reports, with several others in the works. Right now they’re traveling to present papers at the Vienna Summer of Logic, AAAI-14, and AGI-14.
- 5 new papers & book chapters: “Why We Need Friendly AI,” “The errors, insights, and lessons of famous AI predictions,” “Problems of self-reference…,” “Program equilibrium…,” and “The ethics of artificial intelligence.”
- 11 new technical reports: 7 reports from the December 2013 workshop, “Botworld,” “Loudness…,” “Distributions allowing tiling…,” and “Non-omniscience…”
- New book: Smarter Than Us, published both as an e-book and a paperback.
- Held one MIRI workshop and launched the MIRIx program, which currently supports 8 independently-organized Friendly AI discussion/research groups around the world.
- New analyses: Robby’s posts on naturalized induction, Luke’s list of 70+ studies which could improve our picture of superintelligence strategy, “Exponential and non-exponential trends in information technology,” “The world’s distribution of computation,” “How big is the field of artificial intelligence?,” “Robust cooperation: A case study in Friendly AI research,” “Is my view contrarian?,” and “Can we really upload Johnny Depp’s brain?”
- Won $60,000+ in matching and prizes from sources that wouldn’t have otherwise given to MIRI, via the Silicon Valley Gives fundraiser. (Thanks again, all you dedicated donors!)
- 49 new expert interviews, including interviews with Scott Aaronson (MIT), Max Tegmark (MIT), Kathleen Fisher (DARPA), Suresh Jagannathan (DARPA), André Platzer (CMU), Anil Nerode (Cornell), John Baez (UC Riverside), Jonathan Millen (MITRE), and Roger Schell.
- 4 transcribed conversations about MIRI strategy: 1, 2, 3, 4.
- Published a thorough “2013 in review.”
Ongoing Activities You Can Help Support
- We’re writing an overview of the Friendly AI technical agenda (as we see it) so far.
- We’re currently developing and testing several tutorials on different pieces of the Friendly AI technical agenda (tiling agents, modal agents, etc.).
- We’re writing several more papers and reports.
- We’re growing the MIRIx program, largely to grow the pool of people we can plausibly hire as full-time FAI researchers in the next couple years.
- We’re planning, or helping to plan, multiple research workshops, including the May 2015 decision theory workshop at Cambridge University.
- We’re finishing the editing for a book version of Eliezer’s Sequences.
- We’re helping to fund further SPARC activity, which provides education and skill-building to elite young math talent, and introduces them to ideas like effective altruism and global catastrophic risks.
- We’re continuing to discuss formal collaboration opportunities with UC Berkeley faculty and development staff.
- We’re helping Nick Bostrom promote his Superintelligence book in the U.S.
- We’re investigating opportunities for supporting Friendly AI research via federal funding sources such as the NSF.
Other projects are still being surveyed for likely cost and impact. See also our mid-2014 strategic plan.
We appreciate your support for our work! Donate now, and seize a better than usual opportunity to move our work forward. If you have questions about donating, please contact Malo Bourgon at (510) 292-8776 or email@example.com.
† $200,000 of total matching funds has been provided by Jaan Tallinn, Edwin Evans, and Rick Schwall.
Louie Helm has left MIRI to pursue another opportunity. Louie remains a valued MIRI advisor, and we wish him the best in his new venture.
Louie played a pivotal role in MIRI’s recent transformation. Indeed, I most naturally think of the past 2.5 years as the “Luke & Louie era” in MIRI’s history. So I’d like to share with MIRI’s supporters some of what Louie contributed to MIRI’s recent transformation and growth.
Louie was a visiting fellow with SIAI (before it was called MIRI) in 2010, and then he returned to Asia but continued to serve as SIAI’s unpaid volunteer coordinator. Louie noticed my articles on Less Wrong and asked me in January 2011 to help him finish his Optimal Employment post. He then persuaded me to quit my job in Los Angeles and meet him in Berkeley to improve SIAI’s operations (as an intern).
Upon returning to Berkeley, Louie set up SIAI’s donor database, helped me write SIAI’s first strategic plan, led the effort for that summer’s fundraising drive, and worked with me on a long list of improvements to organizational efficiency. By the end of the year we had both been given executive roles at SIAI.
Later, Louie took the lead in SIAI’s branding transition to MIRI (e.g. domain names, website design, organization name market testing), and in finding and securing for MIRI a new office in downtown Berkeley. He has also networked for MIRI at dozens of events, helped organize and sell tickets for three Singularity Summits, won and managed MIRI’s AdWords grant (with Kevin Fisher), wrote our Recommended Courses page, created several new streams of revenue (affinity card, affiliate links, etc.), secured for MIRI several professional services and needed insurance contracts, and much more.
Louie’s accomplishments at MIRI are too numerous to list here. So, I’d like to conclude by thanking Louie for something perhaps less tangible but still very important: his business experience and advice. I did not have prior management experience when I was offered a leadership role at MIRI, and much of the credit for the last 2.5 years of organizational improvement and growth at MIRI must go to Louie’s business intuitions, and his willingness to help hone my own business intuitions. Fortunately, Louie’s advice will continue to inform MIRI’s trajectory even as he pursues other opportunities.
MIRI, CSER, and the philosophy department at Cambridge University are co-organizing a decision theory workshop titled Self-Prediction in Decision Theory and AI, to be held in the Faculty of Philosophy at Cambridge University. The tentative dates are May 13–19, 2015.
Speakers confirmed so far include:
- Arif Ahmed (Cambridge)
- Stuart Armstrong (Oxford)
- Rachael Briggs (ANU)
- Daniel Dewey (Oxford)
- Kenny Easwaran (Texas A&M)
- Benja Fallenstein (MIRI)
- Preston Greene (NTU)
- Alan Hájek (ANU)
- Joseph Halpern (Cornell)
- James Joyce (U Michigan)
- Huw Price (Cambridge)
- Wlodek Rabinowicz (Lund)
- Stuart Russell (Berkeley)
- Vladimir Slepnev (Google)
- Eliezer Yudkowsky (MIRI)
UC Berkeley student and MIRI research associate Paul Christiano has released a new report: “Non-omniscience, probabilistic inference, and metamathematics.”
We suggest a tractable algorithm for assigning probabilities to sentences of first-order logic and updating those probabilities on the basis of observations. The core technical difficulty is relaxing the constraints of logical consistency in a way that is appropriate for bounded reasoners, without sacrificing the ability to make useful logical inferences or update correctly on evidence.
Using this framework, we discuss formalizations of some issues in the epistemology of mathematics. We show how mathematical theories can be understood as latent structure constraining physical observations, and consequently how realistic observations can provide evidence about abstract mathematical facts. We also discuss the relevance of these ideas to general intelligence.
What is the relation between this new report and Christiano et al.’s earlier “Definability of truth in probabilistic logic” report, discussed by John Baez here? In this new report, Paul aims to take a broader look at the interaction between probabilistic reasoning and epistemological issues, from an algorithmic perspective, before continuing to think about reflection and truth in particular.
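To give a flavor of the general idea behind the abstract (not the report’s actual algorithm, which handles first-order logic under bounded resources): a probability can be assigned to a logical sentence by weighting the complete truth assignments that satisfy it, and updated by Bayesian conditioning on observed sentences. The toy sketch below does this for propositional logic over two hypothetical variables, `p` and `q`:

```python
from itertools import product

# Toy illustration: probabilities over propositional sentences via a
# distribution on complete truth assignments (models), updated by conditioning.

VARS = ["p", "q"]

def models():
    """All complete truth assignments over VARS."""
    return [dict(zip(VARS, bits)) for bits in product([False, True], repeat=len(VARS))]

def prob(sentence, weights):
    """P(sentence) = total weight of the assignments that satisfy it."""
    return sum(w for m, w in weights.items() if sentence(dict(m)))

def condition(sentence, weights):
    """Renormalize all weight onto assignments satisfying the observed sentence."""
    z = prob(sentence, weights)
    return {m: (w / z if sentence(dict(m)) else 0.0) for m, w in weights.items()}

# Uniform prior over the four truth assignments (keys are hashable model encodings).
weights = {tuple(sorted(m.items())): 0.25 for m in models()}

p = lambda m: m["p"]
p_or_q = lambda m: m["p"] or m["q"]

print(prob(p, weights))       # 0.5 under the uniform prior
weights = condition(p_or_q, weights)
print(prob(p, weights))       # 2/3 after observing "p or q"
```

Full-blown first-order logic breaks the exhaustive enumeration used here, since there are infinitely many models and consistency is undecidable; relaxing logical consistency for such bounded reasoners without losing useful inference is precisely the core difficulty the report addresses.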