Donor Story #1: Noticing Inferential Distance


2013 was by far MIRI’s most successful fundraising year (more details later), so now we’re talking to our donors to figure out: “Okay, what exactly are we doing so right?”

Below is one donor’s story, anonymized and published with permission:

My decision to donate was heavily dependent upon MIRI’s relationship with LessWrong. I did look into MIRI itself, perused the blog and the papers, and did some fact-checking. But this was largely sanity-checking after I had been convinced to donate by my interactions on LessWrong.

Initially, I wasn’t so much convinced to donate as I was convinced that FAI is a more pressing problem than my prior concerns. Once I believed this, it wasn’t a question of whether I was going to donate to FAI research but a question of where to focus my efforts.

I chose to donate to MIRI after it became apparent to me that

  1. Few people are aware of the problems posed by uFAI.
  2. Among those who are, many dismiss the problem after failing to understand it.

This perhaps sounds arrogant. Allow me to explain.

Perhaps paradoxically, one of the biggest factors that made me trust MIRI was Holden Karnofsky’s critique of the organization. Holden’s tool AI suggestion and a number of his other objections seemed, to my eye, transparently foolish. Afterwards, I read the responses of Luke and Eliezer, who rejected these points for reasons similar to why I had rejected them. This went a long way towards convincing me that whatever organization Luke and Eliezer were working at was particularly important.

This scenario played out many times over, in big discussions and in small comments. I discovered that there were many people objecting to uFAI as a problem, and that the vast majority of them fundamentally misunderstood the concerns.

It’s not that I think Holden Karnofsky or other dissenters are stupid or misinformed. I respect Karnofsky in particular: he’s very intelligent and he’s doing incredible work. I certainly don’t mean to imply that I’m smarter than him or than anyone else who questions MIRI’s mission.

Rather, I have a history of arguing for uncouth ideas. I’ve often found myself on the far side of an inferential gap. I’ve long been convinced that smart people can fundamentally misunderstand the most important problems. Frequently I have tried to hone a strange idea, only to have discussion partners fall into the inferential gap four steps before we got to the real questions.

Many of the discussions about FAI — Karnofsky’s critique foremost among them — convinced me that FAI is the same sort of problem. The sort where people get caught up debating whether the problem is actually a problem, the sort where it’s very difficult to find people who are actually searching for solutions instead of debating whether solutions are necessary.

This was my primary reason for trusting MIRI: these discussions left me convinced that I should expect a dearth of experts who are appropriately concerned.

I didn’t expect there to exist organizations outside of MIRI (and FHI, and a few others in the same circles) that I could trust to address the problems as I see them. The number of smart people stumbling on inferential gaps made me cautious of other organizations sharing similar missions: even if their goals sounded good it would have been difficult to convince me that their leadership could avoid the common pitfalls.

By contrast, Luke, Eliezer, and Nick Bostrom demonstrated their abilities time and again, across many blog posts, discussions, and papers. They slowly built up the trust that I now place in MIRI and FHI.

I understand that this still sounds arrogant, and that was not lost on me at the time. I was introduced to these issues via Eliezer’s writings, and thus I had to expect a bias towards Eliezer’s viewpoint. The fact that I was dismissing many arguments from very smart people with minimal consideration raised some red flags. The fact that the field was rife with misconception implied that I should anticipate misconceptions on my own part.

I was aware of all this, and I spent a long time trying to account for these points. The decision was not made lightly: it took four solid months of reflection, research, and thought before I was confident enough to donate to MIRI. In that timeframe I found more respect for the opposing views, but my conclusions did not change.

I came out the other end convinced that uFAI is a pressing problem, that FAI research is important, and that the field is full of landmines. MIRI remains one of the very few organizations that have convinced me they have a chance of navigating this minefield successfully.

Of course, these conclusions were not quite so crisp in my head at the time. There was always more thinking to do, more reflection to be had, more facts to check before moving piles of money around. And life kept on happening in the meantime: work and leisure took precedence over deciding where my money should go. It was the summer matching challenge that finally forced my hand, shook me from my reverie, and provided a convenient excuse to actually hit the button.