“Algorithmic Progress in Six Domains” Released

Papers

Today we released a new technical report by visiting researcher Katja Grace called “Algorithmic Progress in Six Domains.” The report summarizes data on algorithmic progress – that is, better performance per fixed amount of computing hardware – in six domains:

  • SAT solvers,
  • Chess and Go programs,
  • Physics simulations,
  • Factoring,
  • Mixed integer programming, and
  • Some forms of machine learning.

Our purpose for collecting these data was to shed light on the question of intelligence explosion microeconomics, though we suspect the report will be of broad interest within the software industry and computer science academia.

One finding from the report was previously discussed by Robin Hanson here. (Robin saw an early draft on the intelligence explosion microeconomics mailing list.)

The preferred page for discussing the report in general is here.

Summary:

In recent boolean satisfiability (SAT) competitions, SAT solver performance has increased 5–15% per year, depending on the type of problem. However, these gains have been driven by widely varying improvements on particular problems. Retrospective surveys of SAT performance (on problems chosen after the fact) display significantly faster progress.

Chess programs have improved by around 50 Elo points per year over the last four decades. Estimates of the significance of hardware improvements are very noisy, but are consistent with hardware improvements being responsible for approximately half of progress. Progress has been smooth on the scale of years since the 1960s, except for the past five years. Go programs have improved by about one stone per year for the last three decades. Hardware doublings produce diminishing Elo gains, on a scale consistent with hardware accounting for around half of progress.
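For a sense of scale, here is a minimal back-of-envelope sketch (not from the report): if hardware accounts for about half of the ~50 Elo/year and hardware performance doubles roughly every 1.5 years (an assumed, Moore’s-law-style rate), the average payoff per doubling works out to a few dozen Elo.

```python
# Back-of-envelope sketch. The doubling period is an assumption, not a figure
# from the report; the other two numbers come from the summary above.
elo_per_year_total = 50      # ~50 Elo/year of total progress over four decades
hardware_share = 0.5         # hardware consistent with ~half of progress
years_per_doubling = 1.5     # assumed hardware doubling period

elo_per_year_hw = elo_per_year_total * hardware_share
elo_per_doubling = elo_per_year_hw * years_per_doubling
print(f"~{elo_per_doubling:.0f} Elo per hardware doubling, on average")  # ~38 Elo
```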

Improvements in a variety of physics simulations (selected after the fact to exhibit performance increases due to software) appear to be roughly half due to hardware progress.

The largest number factored to date has grown by about 5.5 digits per year for the last two decades; computing power increased 10,000-fold over this period, and it is unclear how much of the increase is due to hardware progress.
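One way to make the hardware question concrete is a rough sketch (not in the report) based on the heuristic running time of the general number field sieve, the standard algorithm for factoring large general integers. Assuming a starting point of about 130 digits and the reported 10,000-fold growth in computing power, the sketch below solves for how many extra digits raw compute alone would buy with the algorithm held fixed; constant factors, memory, and parallelism are all ignored.

```python
import math

def gnfs_cost(digits):
    """Heuristic GNFS complexity: exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3))."""
    ln_n = digits * math.log(10)   # ln(n) for a number with this many decimal digits
    c = (64.0 / 9.0) ** (1.0 / 3.0)
    return math.exp(c * ln_n ** (1.0 / 3.0) * math.log(ln_n) ** (2.0 / 3.0))

base_digits = 130        # assumed starting point, roughly two decades ago
compute_growth = 1e4     # the report's ~10,000-fold growth in computing power

digits = base_digits
while gnfs_cost(digits) / gnfs_cost(base_digits) < compute_growth:
    digits += 1

print(f"Hardware alone would buy roughly {digits - base_digits} extra digits")
```

This is only meant to illustrate the kind of scaling argument involved; the report itself treats the hardware/software attribution here as unclear.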

Some mixed integer programming (MIP) algorithms, run on modern MIP instances with modern hardware, have roughly doubled in speed each year. MIP is an important optimization problem, but one which has been called to attention after the fact due to performance improvements. Other optimization problems have had more inconsistent (and harder to determine) improvements.

Various forms of machine learning have had steeply diminishing progress in percentage accuracy over recent decades. Some vision tasks have recently seen faster progress.

AI Risk and the Security Mindset

Analysis

In 2008, security expert Bruce Schneier wrote about the security mindset:

Security requires a particular mindset. Security professionals… see the world differently. They can’t walk into a store without noticing how they might shoplift. They can’t use a computer without wondering about the security vulnerabilities. They can’t vote without trying to figure out how to vote twice…

SmartWater is a liquid with a unique identifier linked to a particular owner. “The idea is for me to paint this stuff on my valuables as proof of ownership,” I wrote when I first learned about the idea. “I think a better idea would be for me to paint it on your valuables, and then call the police.”

…This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don’t have to exploit the vulnerabilities you find, but if you don’t see the world that way, you’ll never notice most security problems.

A recurring problem in much of the literature on “machine ethics” or “AGI ethics” or “AGI safety” is that researchers and commenters often appear to be asking the question “How will this solution work?” rather than “How will this solution fail?”

Here’s an example of the security mindset at work when thinking about AI risk. When presented with the suggestion that an AI would be safe if it “merely” (1) was very good at prediction and (2) gave humans text-only answers that it predicted would result in each stated goal being achieved, Viliam Bur pointed out a possible failure mode (which was later simplified):

Example question: “How should I get rid of my disease most cheaply?” Example answer: “You won’t. You will die soon, unavoidably. This report is 99.999% reliable”. Predicted human reaction: Decides to kill self and get it over with. Success rate: 100%, the disease is gone. Costs of cure: zero. Mission completed.

This security mindset is one of the traits we look for in researchers we might hire or collaborate with. Such researchers show a tendency to ask “How will this fail?” and “Why might this formalism not quite capture what we really care about?” and “Can I find a way to break this result?”

That said, there’s no sense in being infinitely skeptical of results that may help with AI security, safety, reliability, or “friendliness.” As always, we must think with probabilities.

Also see:

Index of Transcripts

News

Volunteers at MIRI Volunteers and elsewhere have helpfully transcribed several audio/video recordings related to MIRI’s work. This post is a continuously updated index of those transcripts.

All transcripts of Singularity Summit talks are available here.

Other available transcripts include:

MIRI’s December 2013 Workshop

News


From December 14-20, MIRI will host another Workshop on Logic, Probability, and Reflection. This workshop will focus on the Löbian obstacle, probabilistic logic, and the intersection of logic and probability more generally.

Participants confirmed so far include:

If you have a strong mathematics background and might like to attend this workshop, it’s not too late to apply! And even if this workshop doesn’t fit your schedule, please do apply, so that we can notify you of other workshops (long before they are announced publicly).

Nick Beckstead on the Importance of the Far Future

Conversations

Nick Beckstead recently finished a Ph.D. in philosophy at Rutgers University, where he focused on practical and theoretical ethical issues involving future generations. He is particularly interested in the practical implications of taking full account of how actions taken today affect people who may live in the very distant future. His research focuses on how big picture questions in normative philosophy (especially population ethics and decision theory) and various big picture empirical questions (especially about existential risk, moral and economic progress, and the future of technology) feed into this issue.

Apart from his academic work, Nick has been closely involved with the effective altruism movement. He has been the director of research for Giving What We Can, has worked as a summer research analyst at GiveWell, is currently on the board of trustees for the Centre for Effective Altruism, and recently became a research fellow at the Future of Humanity Institute.

Read more »

Roman Yampolskiy on AI Safety Engineering

Conversations

Roman V. Yampolskiy holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo, where he was a recipient of a four-year NSF IGERT fellowship. Before beginning his doctoral studies, Dr. Yampolskiy received a combined BS/MS degree (High Honors) in Computer Science from the Rochester Institute of Technology, NY, USA.

After completing his PhD, Dr. Yampolskiy held an Affiliate Academic position at the Center for Advanced Spatial Analysis, University College London. In 2008 Dr. Yampolskiy accepted an assistant professor position at the Speed School of Engineering, University of Louisville, KY. He had previously conducted research at the Laboratory for Applied Computing (currently known as the Center for Advancing the Study of Infrastructure) at the Rochester Institute of Technology and at the Center for Unified Biometrics and Sensors at the University at Buffalo. Dr. Yampolskiy is also an alumnus of Singularity University (GSP2012) and a past visiting fellow of MIRI.

Dr. Yampolskiy’s main areas of interest are behavioral biometrics, digital forensics, pattern recognition, genetic algorithms, neural networks, artificial intelligence, and games. He is the author of over 100 publications, including multiple journal articles and books. His research has been cited by numerous scientists and profiled in popular magazines, both American and foreign (New Scientist, Poker Magazine, Science World Magazine), on dozens of websites (BBC, MSNBC, Yahoo! News), and on radio (German National Radio, Alex Jones Show). Reports about his work have attracted international attention and have been translated into many languages, including Czech, Danish, Dutch, French, German, Hungarian, Italian, Polish, Romanian, and Spanish.

Read more »

James Miller on Unusual Incentives Facing AGI Companies

Conversations

James D. Miller is an associate professor of economics at Smith College. He is the author of Singularity Rising, Game Theory at Work, and a principles of microeconomics textbook, along with several academic articles.

He has a PhD in economics from the University of Chicago and a J.D. from Stanford Law School, where he was on Law Review. He is a member of cryonics provider Alcor and a research advisor to MIRI. He is currently co-writing a book on better decision making with the Center for Applied Rationality and will probably be an editor of the next edition of the Singularity Hypotheses book. He is a committed bio-hacker, currently practicing or consuming a paleo diet, neurofeedback, cold thermogenesis, intermittent fasting, brain fitness video games, smart drugs, bulletproof coffee, and rationality training.

 

Luke Muehlhauser: Your book chapter in Singularity Hypotheses describes some unusual economic incentives facing a future business that is working to create AGI. To explain your point, you make the simplifying assumption that “a firm’s attempt to build an AGI will result in one of three possible outcomes”:

  • Unsuccessful: The firm fails to create AGI, losing value for its owners and investors.
  • Riches: The firm creates AGI, bringing enormous wealth to its owners and investors.
  • Foom: The firm creates AGI but this event quickly destroys the value of money, e.g. via an intelligence explosion that eliminates scarcity, or creates a weird world without money, or exterminates humanity.

How does this setup allow us to see the unusual incentives facing a future business that is working to create AGI?


James Miller: A huge asteroid might hit the earth, and if it does it will destroy mankind. You should be willing to bet everything you have that the asteroid will miss our planet because either you win your bet or Armageddon renders the wager irrelevant. Similarly, if I’m going to start a company that will either make investors extremely rich or create a Foom that destroys the value of money, you should be willing to invest a lot in my company’s success because either the investment will pay off, or you would have done no better making any other kind of investment.

Pretend I want to create a controllable AGI, and if successful I will earn great Riches for my investors. At first I intend to follow a research and development path in which if I fail to achieve Riches, my company will be Unsuccessful and have no significant impact on the world. Unfortunately, I can’t convince potential investors that the probability of my achieving Riches is high enough to make my company worth investing in. The investors assign too large a likelihood that other potential investments would outperform my firm’s stock. But then I develop an evil alternative research and development plan under which I have the exact same probability of achieving Riches as before but now if I fail to create a controllable AGI, an unfriendly Foom will destroy humanity. Now I can truthfully tell potential investors that it’s highly unlikely any other company’s stock will outperform mine.
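Here is a minimal numerical sketch of that incentive (the figures are hypothetical, not Miller’s): with a low probability of Riches, the firm’s expected return can lag an ordinary investment, but once the failure mode is shifted from Unsuccessful to Foom, an investor who reasons as in the asteroid example, conditioning on money still having value, sees a stock that nothing else can beat.

```python
# Hypothetical illustration of Miller's argument; none of these numbers are from the interview.
p_riches = 0.01           # probability the firm succeeds and pays off
payoff_riches = 100.0     # return multiple to investors if Riches occurs
payoff_alternative = 2.0  # return multiple of the best ordinary investment

def expected_return_given_money_matters(p_foom):
    """Expected return multiple, conditioned on outcomes in which money retains its value."""
    p_unsuccessful = 1.0 - p_riches - p_foom
    p_money_matters = p_riches + p_unsuccessful      # Foom outcomes are excluded
    return (p_riches * payoff_riches + p_unsuccessful * 0.0) / p_money_matters

# Safe R&D path: failure is merely Unsuccessful, so money always matters.
print(expected_return_given_money_matters(p_foom=0.0), "vs. alternative", payoff_alternative)   # 1.0 vs. 2.0
# "Evil" path: same chance of Riches, but failure now destroys the value of money.
print(expected_return_given_money_matters(p_foom=0.99), "vs. alternative", payoff_alternative)  # 100.0 vs. 2.0
```

The firm’s probability of achieving Riches never changes; what changes is the set of outcomes in which the investor’s comparison with other stocks is meaningful.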

Read more »

MIRI’s July Newsletter: Fundraiser and New Papers

Newsletters



Greetings from the Executive Director

Dear friends,

Another busy month! Since our last newsletter, we’ve published 3 new papers and 2 new “analysis” blog posts, we’ve significantly improved our website (especially the Research page), we’ve relocated to downtown Berkeley, and we’ve launched our summer 2013 matching fundraiser!

MIRI also recently presented at the Effective Altruism Summit, a gathering of 60+ effective altruists in Oakland, CA. As philosopher Peter Singer explained in his TED talk, effective altruism “combines both the heart and the head.” The heart motivates us to be empathic and altruistic toward others, while the head can “make sure that what [we] do is effective and well-directed,” so that altruists can do not just some good but as much good as possible.

As I explain in Friendly AI Research as Effective Altruism, MIRI was founded in 2000 on the premise that creating Friendly AI might be a particularly efficient way to do as much good as possible. Effective altruists focus on a variety of other causes, too, such as poverty reduction. As I say in Four Focus Areas of Effective Altruism, I think it’s important for effective altruists to cooperate and collaborate, despite their differences of opinion about which focus areas are optimal. The world needs more effective altruists, of all kinds.

MIRI engages in direct efforts — e.g. Friendly AI research — to improve the odds that machine superintelligence has a positive rather than a negative impact. But indirect efforts — such as spreading rationality and effective altruism — are also likely to play a role, for they will influence the context in which powerful AIs are built. That’s part of why we created CFAR.

If you think this work is important, I hope you’ll donate now to support our work. MIRI is entirely supported by private funders like you. And if you donate before August 15th, your contribution will be matched by one of the generous backers of our current fundraising drive.

Thank you,

Luke Muehlhauser

Executive Director

Read more »