November 2012 Newsletter


Greetings from the Executive Director


Dear friends of the Machine Intelligence Research Institute,

My thanks to the dozens of staff members, contractors, and volunteers who helped make this year’s Singularity Summit our most professional and exciting Summit yet! Videos of the talks are now online, but I pity those who missed out on the live event and the killer lobby scene. We made more room in the schedule this year for mingling and networking, and everyone seemed to love it. After all, the future won’t be created merely by information and information technologies, but by the communities of people who decide to create the future together.

The Summit is a tremendous amount of work each year, and so it felt great to have so many people approach me to say, unprompted, “Wow, this is the best Summit yet!” and “You guys really took it to the next level this year; this is great!” I replayed those moments in my head on Sunday night as I drifted into the blissful coma that would repay several weeks of sleep debt.

Luke Muehlhauser


Singularity Summit Rocks San Francisco


The Singularity Summit 2012 was held at the Masonic Center in San Francisco on October 13-14, with an attendance of over 600 scientists, entrepreneurs, and thought leaders. We received media coverage from the BBC (online later this month), the Wall Street Journal, The Verge, PolicyMic, and several other media outlets. The full program is online for your viewing enjoyment at Fora.tv. (If you like to watch talks at an accelerated playback speed, you can sign up for a free trial of Fora.tv, download the HD video, and play it with a program like VLC that allows you to adjust playback speed.)

We would like to thank everyone who participated this year, including all the speakers, attendees, and hard-working staff. Everyone we heard from was very impressed by the program and the quality of lobby networking and discussion. Stay tuned for announcements about the next Singularity Summit!


Workshop: Rationality for Entrepreneurs


On November 16-18, the Machine Intelligence Research Institute’s sister organization Center for Applied Rationality (CFAR) will be running an immersive rationality workshop in the San Francisco Bay Area for a select group of 25 entrepreneurs.

The workshop builds on CFAR’s previous rationality training retreats to present a curriculum that addresses the highest-priority improvements to reasoning for entrepreneurs. Small class sizes, interactive workshop activities, personalized attention inside and outside of class, and six weeks of regular follow-up are designed to help participants learn to actually use the techniques, rather than just know about them academically. The curriculum includes:

  1. What ideal decision-making looks like: when to trust your gut, and when to trust your head.
  2. How to learn about your own motivations and goals using principled thought experiments.
  3. How to make more accurate everyday predictions using Bayes’ Rule, a simple but powerful intuitive tool from probability theory.
  4. The science behind stress reactions, and how to make it easier to ask VCs for investment, customers for money, and employees to accept equity.
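As a taste of what item 3 looks like in practice, here is a minimal sketch of an everyday prediction with Bayes’ Rule. The scenario and all the numbers are our own illustration, not workshop material:

```python
# A minimal sketch of an everyday prediction using Bayes' Rule:
# P(H | E) = P(E | H) * P(H) / P(E).

def bayes_posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(hypothesis | evidence) given a prior and two likelihoods."""
    # P(E) by the law of total probability over H and not-H.
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Illustrative example: you estimate a 30% chance a prospect will buy
# (your prior). Prospects who eventually buy answer a follow-up email
# 80% of the time; those who don't buy answer only 20% of the time.
# The prospect answers your email -- how should your estimate change?
posterior = bayes_posterior(0.30, 0.80, 0.20)
print(round(posterior, 3))  # 0.632 -- the reply roughly doubles your confidence
```

The point of the exercise is that a single, honestly estimated likelihood ratio can move a gut-level guess a long way in either direction.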

Applications are still open. To fill out an application, or to find out more about the workshop, go to the CFAR website.


New Less Wrong Sequence from Eliezer Yudkowsky


Eliezer Yudkowsky returns to Less Wrong with a new sequence of articles, Highly Advanced Epistemology 101 for Beginners, which sets the stage for his next sequence, “Open Problems in Friendly AI.” The latter sequence will outline the mathematical and philosophical problems which need to be solved to make concrete progress on Friendly AI.

For an earlier article outlining some open problems in Friendly Artificial Intelligence, see Luke Muehlhauser’s So You Want to Save the World. These open problems include: developing a reflective decision theory, selecting ideal Bayesian priors, and ensuring that an AI’s utility function remains stable even under fundamental changes to the AI’s ontology.


New Volunteer Platform Launched!


Sign up here: www.singularityvolunteers.org

Over the past couple of months we thought hard about how to improve our volunteer program, with the goal of finding a system that makes it easier to engage volunteers, create a sense of community, and quantify volunteer contributions. After evaluating several different volunteer management platforms, we decided to partner with Youtopia — a young company with a lot of promise — and make heavy use of Google Docs.

Youtopia structures volunteer opportunities into challenges with associated activities. Completing activities earns volunteers points, which let them gauge how much they are contributing relative to other volunteers (friendly competition encouraged), as well as awards that showcase specific accomplishments. Leveraging Google Docs allows volunteers to work together more smoothly, in real time or asynchronously.

It used to be that most volunteers were isolated from each other as they worked with different SI staff members on various projects. This made generating a sense of community difficult, and we think a sense of community is important for long-term volunteer engagement. It was also difficult to quantify the contributions made by our volunteers. Now, volunteers can see what their peers are working on, compete on challenges, and work on activities together, all while Youtopia makes it easy for us to quantify their contributions.

Here is a quote from Project Manager Malo Bourgon: “I’d strongly encourage everyone to head over to singularityvolunteers.org, register as a volunteer, and explore the challenges we currently have posted. As a small nonprofit, we literally have hundreds of hours of work we just can’t afford to do each month. Because of this, volunteers are really important to us; they really do make a meaningful impact.”


Singularity Rising Published


Singularity Rising, a new book by Smith College economics professor James D. Miller (author of Principles of Microeconomics), is now available for purchase. Here are some of the scenarios that Professor Miller considers in his new book:

  • A merger of man and machine making society fantastically wealthy and nearly immortal.
  • Competition with billions of cheap AIs drives human wages to almost nothing while making investors rich.
  • Businesses rethink investment decisions to take into account an expected future period of intense creative destruction.
  • Inequality drops worldwide as technologies mitigate the cognitive cost of living in impoverished environments.
  • Drugs designed to fight Alzheimer’s disease and keep soldiers alert on battlefields have the fortunate side effect of increasing all of their users’ IQs, which, in turn, adds percentage points to worldwide economic growth.

Miller’s book has received glowing endorsements from Luke Muehlhauser, PayPal co-founder Peter Thiel, SENS Foundation Chief Science Officer Aubrey de Grey, Humanity+ Chairman Natasha Vita-More, and novelist Vernor Vinge.


Register Now for Tickets to AGI-12!


The Fifth Conference on Artificial General Intelligence will be held at Oxford University this year, from December 8-11. AGI researchers will present and discuss their results from the last year. Register here.

Some of the speakers include David Hanson, CEO of Hanson Robotics, who will speak on humanoid robots and AGI; Angelo Cangelosi, professor of AI and cognition, who will speak on cognitive robotics; professor of cognitive science Margaret Boden, who will speak on creativity and AGI; and Nick Bostrom, professor of philosophy, who will speak on the future evolution of advanced AGIs and the dynamics of AGI goal systems.

Immediately following AGI-12 will be the first conference on AGI Impacts, organized and hosted by the Future of Humanity Institute. The keynote speakers will be Steve Omohundro and Bruce Schneier.

The Singularity Institute is sponsoring a $1,000 prize, the 2012 Turing Prize for Best AGI Safety Paper, for exceptional research on the question of how to develop safe architectures or goals for AGI.

We hope to see you in Oxford for these important conferences!


Michael Anissimov and Louie Helm to Speak at Humanity+ @ San Francisco


SI staffers Michael Anissimov and Louie Helm will give talks at the upcoming Humanity+ @ San Francisco conference on December 1-2, speaking alongside distinguished presenters such as Aubrey de Grey and David Pearce.

Michael Anissimov will speak on “The Media Performance of Transhumanism” while Louie Helm will speak on “The Mainstream Academic Publishing We Need”. Here is Michael Anissimov’s abstract:

Since 2005 or so, transhumanism and transhumanist ideas have had a rising profile in the media. What have been our greatest successes of the past few years and how can we repeat them? Which memes are getting the most airtime, and which are being ignored? Is more media exposure always better? What can we do to ensure that we and our organizations are media-savvy? How do we leverage technology to maximize the impact of social media? This talk by the media director of the Singularity Institute will examine these questions and come to concrete conclusions.

Here is Louie Helm’s abstract:

It may seem obvious to you that progress in fields like AGI and life extension will have the most long-lasting and far-reaching impact of any work you could possibly be doing right now. But what enabled you to realize that? Your path to understanding probably passed through a period of several months of independent self-study that required you to evaluate many informal arguments of varying quality, scattered across the internet. But if we want to attract more and better researchers, especially domain experts who don’t have hundreds of hours to study outside their field, we need to formally summarize our best ideas in standard academic style to help erase this enormous barrier to entry. Contributing to this effort by publishing current research is accessible to many and support is available for those who are motivated.

See other abstracts on the conference abstracts page. Tickets for the conference are available now.


Featured Volunteer: Tim Oertel


Every newsletter, we like to recognize a volunteer who has made a special contribution to the Singularity Institute. This month, we honor Tim Oertel for his proofreading work.

With the launch of the new website, the Singularity Institute has republished many of its existing publications in SI’s new article template. Moreover, this year has been a productive one in terms of new publications from SI staff and research associates. As such, volunteer proofreaders have never been more important to SI. Tim Oertel is one of SI’s leading proofreaders. Thank you for your excellent work, Tim!


Featured Summit Video: Luke Muehlhauser on
“The Singularity: Promise and Peril”


In “The Singularity: Promise and Peril”, SI Executive Director Luke Muehlhauser explains the Singularity, its potential risks and benefits, and what we can do about it. Muehlhauser emphasizes that intelligence lies at the root of all technology, updating the familiar Arthur C. Clarke quote, “Any sufficiently advanced technology is indistinguishable from magic,” with his new version: “Any sufficiently advanced intelligence is indistinguishable from magic.” For more of Luke’s views on the Singularity, we encourage you to read his e-book Facing the Singularity.


Featured Research Paper: “Learning What to Value”


In this research paper from 2011, SI research associate Daniel Dewey (now also a Research Fellow at the Future of Humanity Institute at Oxford University) outlines his concept of “value learners.” Here is a snippet from the paper’s abstract:

Reinforcement learning can only be used in the real world to define agents whose goal is to maximize expected rewards, and since this goal does not match with human goals, AGIs based on reinforcement learning will often work at cross-purposes to us. To solve this problem, we define value learners, agents that can be designed to learn and maximize any initially unknown utility function so long as we provide them with an idea of what constitutes evidence about that utility function.
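As a rough illustration of the idea (our own toy sketch, not Dewey’s formal model), a value learner can be pictured as an agent that holds a probability distribution over candidate utility functions, updates that distribution on evidence, and chooses actions that maximize expected utility under its current beliefs:

```python
# Toy sketch of a "value learner": the agent is uncertain which utility
# function is correct, updates its beliefs on evidence, and then picks
# the action with the highest expected utility. Purely illustrative.

def update_beliefs(beliefs, likelihoods):
    """Bayesian update; both arguments map a utility-fn name to a probability."""
    unnorm = {u: beliefs[u] * likelihoods[u] for u in beliefs}
    total = sum(unnorm.values())
    return {u: p / total for u, p in unnorm.items()}

def best_action(actions, utilities, beliefs):
    """Choose the action maximizing expected utility over candidate utilities."""
    def expected_utility(action):
        return sum(beliefs[u] * utilities[u](action) for u in beliefs)
    return max(actions, key=expected_utility)

# Two candidate utility functions the agent cannot yet distinguish:
utilities = {
    "likes_apples":  lambda a: 1.0 if a == "apple" else 0.0,
    "likes_oranges": lambda a: 1.0 if a == "orange" else 0.0,
}
beliefs = {"likes_apples": 0.5, "likes_oranges": 0.5}

# Evidence strongly favors apples; the agent updates, then acts.
beliefs = update_beliefs(beliefs, {"likes_apples": 0.9, "likes_oranges": 0.1})
print(best_action(["apple", "orange"], utilities, beliefs))  # apple
```

The contrast with a pure reward maximizer is that the agent’s goal is the initially unknown utility function itself, so new evidence changes what it tries to do rather than merely how it does it.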


News Items


First Genome Study to Sequence 1000+ Human Genomes
The Guardian, October 31, 2012

Self-driving cars now legal in California
CNN, October 30, 2012

Swarm Robots Cooperate with a Flying Drone
Wimp.com, October 23, 2012

IBM’s Watson Expands Commercial Applications, Plans to Go Mobile
Singularity Hub, October 14, 2012

Google Puts Its Virtual Brain Technology to Work
Technology Review, October 5, 2012

‘Green Brain’ Project to Create an Autonomous Flying Robot With a Honey Bee Brain
ScienceDaily, October 2, 2012

Thank You for Reading!
