No Positive Reason to Suspect OpenPhil of Wrongdoing

I’m not too concerned with OpenPhil’s grant to OpenAI. I don’t expect the potential conflicts of interest will amount to anything. I think there’s a perverse outcome where everyone is self-conscious about this except OpenPhil/GiveWell. Like, had they not disclosed the potential conflicts of interest, nobody would’ve noticed. Professional relationships between different teams in effective altruism, where close mutual friends or spouses work at either organization, have existed for years. It happens in every community in every culture on Earth. It’s not always nepotism.
 
Humans are going to be human. If we act surprised that people will be drawn to and form close bonds with people who share their attitudes, interests, and values; whom they spend lots of time with; and whom they’re exposed to in the course of their daily lives, we’re phoning it in. That’s not to say the questions aren’t worth asking in the first place.
 
What I’m saying is that if we’re hemming and hawing over this in a community where I already expect members to tread lightly and not be outrageously disrespectful, we might as well be candid about our concerns. There’s no point being polite for politeness’ sake alone if what we’re actually doing is concern-trolling, or passive-aggressively expecting some subtext to be noticed.
 
The culture of EA is sociopolitically dominated by secular liberalism. I think there’s a self-consciousness in that mindset: the ideal is equality of opportunity, which we hope leads to equality of outcome, but often doesn’t. When things don’t line up the way we hoped despite everyone’s best efforts to set up a system as close to objectively fair as was realistic, we become apprehensive. We fear the problem might be us, and that we’re unable to live up to our ideals.
 
I don’t suspect Holden, or anyone at OpenAI or OpenPhil, is culpable of nepotism, of implicit cognitive favouritism, or of failing in their responsibility to try in earnest to do the most good. I think there’s a lot about this grant OpenPhil isn’t disclosing, and that’s odd. Perhaps they’ve made poor judgement calls or errors in reasoning we can sensibly disagree over. The apparent gaps in what OpenPhil is doing with such a grant to OpenAI may get filled in with suspicions of misdeeds, but whatever real mistakes there are are of the far simpler sort everyone makes.
