# OpenAI and other news


We’re only 11 days into December, and this month is shaping up to be a momentous one.

On December 3, the University of Cambridge partnered with the University of Oxford, Imperial College London, and UC Berkeley to launch the Leverhulme Centre for the Future of Intelligence. The Cambridge Centre for the Study of Existential Risk (CSER) helped secure initial funding for the new independent center, in the form of a $15M grant to be disbursed over ten years. CSER and Leverhulme CFI plan to collaborate closely, with the latter focusing on AI’s mid- and long-term social impact.

Meanwhile, the Strategic Artificial Intelligence Research Centre (SAIRC) is hiring its first research fellows in machine learning, policy analysis, and strategy research: details. SAIRC will function as an extension of two existing institutions: CSER and the Oxford-based Future of Humanity Institute. As Luke Muehlhauser has noted, if you’re an AI safety “lurker,” now is an ideal time to de-lurk and get in touch.

MIRI’s research program is also growing quickly, with mathematician Scott Garrabrant joining our core team tomorrow. Our winter fundraiser is in full swing, and multiple matching opportunities have sprung up to bring us within a stone’s throw of our first funding target.

The biggest news, however, is the launch of OpenAI, a new $1 billion research nonprofit staffed with top-notch machine learning experts and co-chaired by Sam Altman and Elon Musk. The OpenAI team describes their mission:

> Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.

I’ve been in conversations with Sam Altman and Greg Brockman at OpenAI as their team has come together. They’ve expressed a keen interest in making sure that AI has a positive impact, and we’re looking forward to future collaborations between our teams. I’m excited to see OpenAI joining the space, and I’m optimistic that their entrance will result in promising new AI alignment research in addition to AI capabilities research.

2015 has truly been an astounding year — and I’m eager to see what 2016 holds in store.

• casebash

Do you believe that OpenAI is actually a sensible idea, given how powerful it could potentially be? Giving everyone an AI is quite possibly equivalent to giving everyone the ability to produce biological weapons, given that the AI could teach them how to do that. Isn’t it better to regulate it like nuclear weapons, with only a few parties having access to them?

• http://mindey.com/ Mindey

I hope they will focus on applications of AI to general global collaboration and consensus-building, and in this way empower mankind as a whole to deal with all kinds of problems first… rather than letting the technology go into unforeseen applications.

• Chris Emery

The cat is out of the bag… what we see is probably a decade behind what the militaries own… a decade with an unlimited budget is a looong time with this technology.
Operation Jade Helm… control of the human dimension…