What Should I Fund?

I’ll probably have money to donate in the next year, and it’s the first time in a few years I’ve donated money. My thoughts on effective altruism (EA) have evolved in that time. I don’t have lots to donate, probably 3 to 4 figures in the next year. However, what I lack in money I make up for in access to information and insight. What I don’t donate to effective charity I’d like to donate to the sorts of things other effective altruists won’t tend to fund. If it turns out I think all my money should go to a project that isn’t run under the administration of a registered NPO, the money I give away may not go to a charity at all. I’m uninterested in feedback about which existing EA organizations I should donate to, as I’m most confident in figuring that out myself, or asking those I trust for input. What’s more, I think it’s just as likely doing something new and unconventional will turn out to be the best giving opportunity for me in the next year as would be funding an existing project. Were I to fund an existing EA organization at this time, it’d likely be Rethink Charity, to help get a new project up and running.

While I think it’s good the EA community is moving away from over-emphasizing earning to give, I think the golden age of effective altruists solving problems by merely throwing money at them still lies ahead of us. In particular, I think effective altruists who think the Open Philanthropy Project and the EA Funds will have blind spots need to think of creative new ways to fund community projects which will get overlooked. I want to put my money where my mouth is, and lead by example. So I’m writing a series of blog posts brainstorming unconventional funding strategies I may pursue. This project is in more of an “explore” than “exploit” phase right now. So, I’m focusing more on generating hypotheses and getting the conversation going than figuring out which of these funding strategies would prove the most valuable. If you have any creative funding ideas, let me know. Otherwise, you can start reading here.


Creative Funding Ideas: Commissioning Blog Posts

Summary: I posit the effective altruism community still greatly undervalues the potential impact of high-quality blog posts, and that it should be a lot easier to produce them than it is now. I claim directly paying community members to write blog posts we want to see, not only as a one-off commission, but as a long-term strategy, could be an easy way to accomplish this. I discuss a couple examples of how I might personally do this in the near future to illustrate what I mean.

Note: here I discuss “commissioning” a blog post. This would entail contracting a specific individual to write a blog post, with a payment arrangement that would start during the production of the blog post and conclude with its completion. For my purposes, commissioning blog posts is the way I expect to go. However, all the advantages of privately commissioning blog posts can also be gained by running contests and prizes, which are also a great way to go.

That something is a “blog post” makes it sound as though we can’t expect it to be high impact. I expect people who’ve been in the EA community for a long time to see past this skepticism about the value of blogs. The movement owes its existence to blogs. People also seem to underestimate how blog posts generate value in EA. If a blog post isn’t so exciting and ground-breaking it gets hundreds of likes, shares, clicks, or upvotes, what’s the point?

Well, I can tell you there are lots of blog posts on the EA Forum, which gets relatively little traffic as is, that don’t have that many upvotes. However, it’s often what we don’t see that counts. If one of the upvotes for the in-depth coverage of an obscure topic on the EA Forum came from an Open Philanthropy Project (Open Phil) program officer, and it changed their decision-making process for their focus area, that post is probably worth more than a Facebook post garnering likes from one thousand effective altruists, but which that program officer missed. This isn’t hypothetical. Over the last few weeks I’ve been watching the talks from EA Global San Francisco 2017, and multiple times Open Phil staff have mentioned EA Forum posts which have led to shifts in their thinking. This should be a big deal to everyone in the community. It doesn’t matter what it looks like is happening on the EA Forum, or what someone else tells you about it. If the individuals who can move 1000x more money than anyone else in the EA community for their own focus area get their updates from the community through the EA Forum, everyone needs to dramatically update on the value of generating high-quality posts on the EA Forum.

So, high-impact blog posts happen. How do we reliably produce them, though? By finding good bloggers and paying them money to write on a topic. I am talking about, if necessary, literally transferring cash I earned into the bank accounts of private individuals on the completion of a blog post at my request. I don’t know why more effective altruists don’t do this. A friend said they’re worried that if the current heavy hitters in the rationality blogosphere don’t post to LessWrong 2.0, it won’t take off. My immediate chain of thought in response was:

  1. “The relaunch of LW2.0 is obviously very valuable.”
  2. “Talking about how much we’re not going to do anything doesn’t seem productive.”
  3. “By the time LW2.0 is in full swing, I will have enough money to motivate other aspiring rationalists to blog on LW2.0.”

This isn’t something I’m thinking about abstractly. If, within a couple weeks of the LW2.0 open beta, it doesn’t look like there’s enough quality activity on the site, I am strongly considering unilaterally and privately incentivizing it by paying people to blog. Maybe they’re an underemployed rationalist who blogs because they’ve got nothing better to do, and the idea of getting paid to do it is the bee’s knees. Maybe they’re a software engineer or entrepreneur who is between projects or jobs right now, has a higher net worth than me, but has too much of an ugh field around blogging on LW2.0. In that case, I’ll overcome their lack of intrinsic motivation to blog with an extrinsic motivation via cash injection.

Another example of the kinds of blog posts I might commission: posts proposing solutions to issues of diversity in the EA community. Diversity is a hard issue to talk about. I think effective altruism does a better job than most communities, but I still see us only pointing out diversity problems exist, suggesting why the problems might exist, and talking about all the ways more diversity would be great. What we should strive to do is hold off on proposing solutions to diversity issues in EA until we’ve discussed the problems. Because of intersectionality, each axis of diversity might require its own solution. Talking about diversity issues is always hard, so it’s probably best to start a big conversation about these topics with a great blog post that sets the standard for the discussion well. After those discussions, we can focus on producing more blog posts that propose solutions. Race, class, gender, LGBTQ+ status, disability and nationality might each demand multiple blog posts to tackle.

I’m counting at least a dozen blog posts to really do justice to the full scope of diversity issues and social dynamics in the EA community. Would I write them all myself? Am I even qualified? How many dozens or hundreds of hours of other material might I have to read to treat these complex issues with the sensitivity, respect and dignity they deserve?

I don’t have to answer those questions myself: there are lots of people who’ve already done the work to answer them for themselves, and who have a comparative advantage in doing it. What’s more, I can take an executive role over the project of managing a blog sequence. Writing a whole series of blog posts can be difficult. But if I pay one person to write one blog post and then they run out of spoons for the next one, I can hire and direct a different person to take over the series.

Finally, another advantage for effective altruists in paying for blog posts they want is they can use price discrimination. If you want to pay a Ph.D. already working at multiple think tanks to do a shallow review or deep dive into a cause, you probably can’t afford them. But at any given time there are hundreds if not thousands of undergrads who can do work which accomplishes the same goal for a fraction of the price. If I or someone else streamlined the sorts of processes I’ve described above, we’d reach economies of scale in blogging not seen since the heyday of LessWrong.

In reading this, I hope you get the gist of how commissioning blog posts could be an effective giving choice. Please make other suggestions for topics/causes/issues you’d like to see blog posts commissioned for, and which effective altruist you think could do it best. There’s a good chance I’ll pay them to do it.

Online Discourse Is the Tip of the Iceberg

This is in response to a post in the Effective Altruism Facebook group.

>I’ve noticed a recurring problem with discussions in EA Facebook groups, and really this is a problem with discussions on all Facebook groups focused on discussing and debating serious, smarty pants topics.

>People say really mean things.

>Sometimes it is blunt. Sometimes it is passive-aggressively wrapped up in diplomatic rational speak. Either way, it makes me want to light a house on fire.
>Here’s the bottom line. It is impossible to engage in a productive or worthwhile conversation with someone who, subtly or overtly, challenges your intelligence, blanketly dismisses your ideas, or otherwise treats you without respect. Meanness salts the earth of fruitful discourse.

>When people talk down to each other, many people (most people?) disengage. It becomes too stressful. Even the second-hand degradation is uncomfortable to watch, and being directly degraded is unbearable. Inevitably, some people will continue to engage, but it will be the people with the most aggressive, adversarial, and abrasive conversational style. This is what I often see happening in EA discussion groups (and other discussion groups too). The most engaged participants are the ones who are rude and condescending to the people they disagree with.

In-person interaction, and human relationships in general, don’t have all the same features as online discourse. EA is a community which takes discussing and debating serious, smarty-pants topics to the level of doing something about them, that some people put their all into, and a way of life. Having been part of the EA community since 2011, knowing hundreds of effective altruists, and hearing stories about experiences similar to my own observations, I notice the same thing happens in person. Effective altruists, not only in how they communicate but in their behaviour over extended periods of time, can often be blunt, passive-aggressive, mean, disrespectful and dismissive. Meanness salts the earth of fruitful cooperation.

As effective altruists are drawn to hubs where they live, work and date each other, when they’re already part of the online bubble that is EA to boot, the relationship an individual has with a whole community can distort their life. As online conversations and personal experiences compound, they just become our lives. So effective altruists can end up living lives where they feel the people whose community they’ve joined are all like this.

For years EAs have talked about improving online discourse, but there’s lots that, even if it somehow does make it onto the internet, doesn’t go public before the whole movement. And it shouldn’t. I think local EA communities will resolve their interpersonal conflicts one way or another, and whatever means local groups can generate to resolve their own problems may be the best we can hope for. Beyond that, the best I can think of is for some effective altruists to start a project suggesting or recommending ways of communicating and being around one another which are more compassionate. We can’t force that on people. However, I don’t want effective altruists to pretend that poor online discourse in the community is utterly unrelated to what happens in those small local groups, or private mailing lists, or at meetups, or in our daily lives as EA becomes our daily lives.

Ultimately, solving communication or coordination problems isn’t some uniquely intellectual task. It’s a human problem. Nobody has this all figured out. To the extent effective altruists, in person and online, are finding solutions for how to get along better, we’re finding solutions for how people get along better, period. This is something the rationality community has focused on for its own purposes more than the EA community has. It’s not my impression the rationality community is particularly better or worse than anyone else at having its members get along. They’ve certainly found or generated tools for doing so, but it takes lots of concentration and focus to utilize them well. As for the tools they’ve borrowed from, e.g., different therapeutic paradigms, I don’t see the rationality community having a better knack for them than those already working in caring professions.

What I think the rationality community is good at, with respect to helping people get along, is showing each other how to get along with a big group in novel social arrangements. To some extent EA and lots of other social movements like it are unprecedented in modern history, and we’re thrust together in ways our intuitions and traditions don’t prepare us for. The rationality community seems focused on drawing on lots of working practices for improved communication and applying them to the unusual situations and lifestyles they find themselves in, because rationalists try unusual things. Doing the most good is about doing good in unusual ways too, or else it wouldn’t be necessary.

Improving online discourse is hard. Discourse in EA seems good enough, though; for the most part people are sticking around. It’s working decently, but I think EAs need to take stock that this is a community in which a lot of us will know each other for the rest of our lives. Solving all global problems seems like it may take a while. Solving online discourse problems is a surface layer which will become less important than the way of life and culture EA is becoming. It’s not too late to shift those norms for the better. I think in time how we learn to get along everywhere will dominate considerations of how we learn to get along online.

Again, I want to emphasize this all needs to be acknowledged, but a lot of this is part of the human condition. I think EAs get so excited by the ideas and the projects and the people that we hope we can find a superhuman way of solving interpersonal problems too. EA is a movement which gives people families of choice, but it won’t be a movement which can prevent those families from splitting up like any other family might. All we can do is try.

The Worst Arguments in Effective Altruism

Every once in a while, an aspiring rationalist will shine a light on history and put to bed a general type of argument commonly used on the internet, because it stopped contributing to raising the sanity waterline long ago, if it ever did at all. That aspiring rationalist is usually Scott Alexander.

Today I’m going to try something similar for effective altruism (EA). I’m going to declare some arguments “retired”, in a hope people won’t use them anymore. I don’t see them used as much as I used to, but I want to explain why in the context of EA they don’t change anyone’s perceptions, and why using them is a waste of time.

They’re forms of the non-central fallacy, i.e., the worst argument in the world, as put forward by Scott himself. Here is why, in the context of effective altruism, arguing against an idea either because “it’s too sci-fi” or because its original source is “just a blog” is the worst argument in the world.

Continue reading

The State of Reducing Wild Animal Suffering

Note: this post was originally written with those organizations and projects in mind originating from within the effective altruism (EA) movement. These organizations are those exclusively focused on reducing the suffering of wild animals from naturogenic causes, i.e., sources of suffering originating in nature itself, as opposed to anthropogenic sources, i.e., those caused by human action. Multiple people have pointed out to me there are of course many people working on reducing wild animal suffering from all manner of causes. Were this community to expand, the types of wilderness interventions it would pursue, drawing from both within and outside the EA movement, would need to be fleshed out.

This is the state of the community as of July 2017.

In the effective animal activism/advocacy (EAA) movement, there are some organizations born out of effective altruism that didn’t previously exist in the broader animal welfare/rights/liberation movement. There are lots of good projects, but it seems these organizations were uniquely suited to making a real go of reducing wild animal suffering (RWAS) as a cause at a time when it seemed most possible. In the meantime, though, multiple organizations have deprioritized RWAS as changes in EAA have occurred. While there are many great organizations out of EAA which continue to focus on farmed animal welfare, at first glance it seems like virtually none are prioritizing RWAS. This is concerning, so I’m making a summary for everyone of, as far as I can tell, which organizations still do or don’t focus on RWAS.

I’ve talked to Brian Tomasik, and he told me the Foundational Research Institute is no longer doing research on wild animal suffering. I learned in a post by Jacy Reese introducing Sentience Institute, a research-focused think tank, that it will be splitting from Sentience Politics, which is prioritizing initiatives in the German-speaking world, and which is in turn splitting from the Effective Altruism Foundation (EAF). Sentience Institute and Sentience Politics will be focusing on farmed animal welfare for the foreseeable future, although focusing on wild animal suffering is still part of their respective missions as a potential future focus. EAF has two researchers focused on RWAS at present. EAF and Raising for Effective Giving continue to recommend charities in any cause that’s evaluated by effective altruists, such as by Animal Charity Evaluators. So, if a charity focusing on RWAS somehow stands out among all the other charities EAs focus on, I’m sure they’ll recommend it too.

Animal Ethics is an organization focusing on RWAS. Sentience Politics also employs two researchers as part of an ongoing RWAS research project. Andres Gomez Emilsson organizes the movement around the Hedonistic Imperative, as laid out by David Pearce, which certainly focuses on RWAS. However, a lot of the groundwork to organize the movement to reduce wild animal suffering; to build its capacity to research, prioritize and evaluate; and to raise awareness and generate resources remains to be done. Small-scale research analysis, which is the greatest extent of real RWAS activity, doesn’t lend itself to galvanizing what seems like more effective altruists and effective animal advocates taking the issue seriously than ever before. I think it’s several hundred people, or maybe even a couple thousand. That’s been enough to launch other causes in EA to vaunted or central status in the movement, and thus more so in the world around it.

What’s more, there is lots of existing knowledge on RWAS that, while in need of being compiled into a neat reading list, can be acted upon. While there is a need to fund knowledge production on RWAS, taking this cause beyond theory is something that can be done, but nobody has a roadmap for it. The current landscape of RWAS is very small, too small to grow on its own. There are a few Facebook groups focused on it (feel free to link them in the comments). If there is any organization or project focusing on reducing wild animal suffering I missed, please let me know.

Otherwise, as far as I know, that’s it. There is no other coordination or organization of efforts to reduce wild animal suffering. I guess this post is a warning signal that if there isn’t some community investment in the cause now, it will never happen. If you’re still reading this, there are so few of us left that input from anyone is valuable and necessary. Feel free to share and use this post as a tentpole for figuring out where we can go from here.

Operationalize the Harm

[content warning: discussion of paedophilia, homophobia]
I’ve been trying to expand my filter bubble these last few months. I’ve been watching some right-wing YouTubers, and while some of them have alright commentary, a lot of it seems, at best, not backed up by numbers. For example, I was watching a video claiming leftist politics is incubating some growing movement of activists trying to get people to accept non-offending paedophiles coming out and seeking help in public, and praising them as virtuous for doing so. Examples of these things happening are given to prove they technically exist, but then they blow up in the news and get blown out of proportion. There are all manner of reasons one might oppose a public policy of paedophiles seeking treatment in this way, and it’s easy to see how a virtual mass hysteria can be generated, and stereotypes perpetuated.
That there are a non-zero number of politically left-wing activists also part of this paedophilia acceptance(?) movement isn’t useful data about how big or imminent a threat it is, or what kind of threat it is so as to effectively address it. These moral panics seem pretty standard in the news cycle these days. It’s hard to tell which ones are actively the worst. The same sort of thing happens on the left. Some people on the left have been using the actions of the Westboro Baptist Church to stereotype all manner of American Christians for years. That’s despite the fact the Westboro Baptist Church is a small but loud organization which does all sorts of things all manner of Christian would reject anyway. I guess you could say, technically, this means there are a non-zero number of right-wing activists who are dangerously homophobic, but that hardly means there’s a threat of the tide turning against progressive popular opinions overnight. The same goes for these paedophilia activists.

We need to substantiate the claims of alleged threat with evidence, because if we end up very wrong, it’s just a waste of resources on top of everything else.

My Opinion on The Gender Wage Gap

A friend asked me my opinion on the gender wage gap. Below is my response.

Literally everybody on all sides who cites statistics not from social scientists tends to use crap numbers that miss the bigger picture. Lots of social scientists from all kinds of fields also have crappy numbers that mischaracterize reality. Bundling all types of jobs together, when different types of careers and occupations develop on the basis of all kinds of different local or workplace cultures, is nonsense. Basically, obviously there will be industries where, controlling for everything else, some systemic type of sexism is the remaining factor accounting for wage gaps or comparable metrics of equity like authority in the workplace. Unfortunately, this level of nuance is lost in the broader culture wars by virtually everyone who isn’t discussing all the above content in the context of research.

What anyone ought to do in practice is, on a piecemeal basis, identify those industries wherein sexism is legitimately having the worst consequences, where nobody has tried anything, for which it actually matters[1], and where something can be done without systemically changing the values of everyone in society, and then do it. This starts by reaching out to the people in and around those industries most likely in practice to be willing and able to do something about it. Lots of people these days say action isn’t worth it if it doesn’t result in systemic change. I’ve talked to all manner of people of every perspective on this issue, and from no angle is that position seen as sound by anyone else, so I reject it outright.

There are people who do urge systemic change, and put their money where their mouths are. This relates to differences between the liberal, Marxian and radical variants of feminism, which I don’t know enough about to follow the state of discourse on at this time, but I encourage its continuation. Of course, by this point I’m entailing commentary on the state of organization and discourse on the political left in general, which is a separate topic.

[1] I am not going to become upset about the inequality between men and women in the field of midwifery, and I’m not going to become upset that more men than women are firefighters when any firefighter, regardless of self-identified sex or gender, must constantly meet some minimum threshold for fitness to be able to reliably and sufficiently perform their job in emergency situations, and it just so happens men are more likely to be able to run while lifting 300 lb people on their shoulders than women.

Why I Mix Humour With Seriousness

A friend recently made the following observation about me.

>I think you pretend you’re joking a lot more often than you actually are. Most people use humor this way sometimes, but I think you’re doing it as a primary strategy.

Since they were the first person who cared enough to point this out, I not only confirmed my friend’s suspicion, but explained why I behave in this manner. It goes as follows.

It’s true that I pass myself off as joking more often than I actually am. What’s going on is that on the object level of a certain discussion or framework, I’m being serious. However, on the meta level, I don’t necessarily believe these things. That is, I take ideas seriously, without believing them myself. It’s my experience that virtually no amount of me saying I sincerely believe something, or saying I sincerely disbelieve something, actually causes everyone to react in a manner consistent with them believing me to be an honest person.
The perception of threatening or harmful lying reliably causes one to lose status with everyone. Harmless or non-threatening lying is usually neutral regarding one’s status with most others, and may even increase status. Being upfront and explicit about this minimizes the probability of losing status. At least, that’s my theory. Anyway, joking is the more reliable way to explicitly signal that the lie one is perpetuating is non-threatening or harmless. I also have a comparative advantage in it.
People tend not to trust people who are never sincere. If I’m always joking, people will think of me as never being sincere, which would be bad for me. So, the only way to plausibly maintain I’m joking about things at the rate I do, without losing in the trade-off of people not wanting to interact with me when they want a sincere interlocutor, is to not be sure if I myself am joking or not. It’s my impression people will end up finding out whether one is lying or not. So, to be able to credibly claim I’m not sure if I’m being serious or not, I must actually not be able to tell if I’m being serious or not. So, I try not to take myself too seriously.
Signaling that I don’t strongly hold beliefs that would pose a threat to any of you, other things being equal, minimizes the probability any of you would harass, offend, stigmatize, dislike, or hate me, etc.
This is a habit I unthinkingly maintain across the whole internet, although if you’d prefer, I’ll make an effort to keep my guard down here, and not pretend to be joking at all when I myself am completely serious.
Another thing is I tend to mix humour and seriousness in much of what I write. I mean, sometimes I’m trying to be intentionally funny, and sometimes I find the most natural way to write on a subject in my own voice is one which, while making serious points, also comes across as playful and/or funny to myself or others. I work on multiple semantic layers. Like, the deeper structure of my opinion is that I obviously still think, all things considered, civnat is ambijectively superior to ethnat, or at least the arguments for ethnat I’ve seen so far aren’t sufficient to change my mind. The humour in my writing exists for social purposes, like indicating that just because I have some non-trivial (even if practically meaningless) disagreements with friends over politics doesn’t mean I think any less of them as people, or that I otherwise intend to treat them with less friendliness in future interactions.

On Modeling Myself

I tend to confuse people, or at least tend to give people one impression when what they think I’m doing, or what I’m actually doing, is something else. Lots of people in the rationality community try modeling each other, and themselves, and supposedly this helps us interact with one another. I have implicit models of people I work off of, which will probably form regardless of whether I want them to or not. I used to try acknowledging or noticing my models of other people to make them explicit, but I found this wasn’t useful. I’ve checked with others, and the perceptions or models others have of me are about what I expected. I think I could model myself as accurately as anyone else could model me, or as accurately as anyone else could model themselves, but I don’t know what I’d use such a model for, so I don’t make much of an effort to maintain one.

My whole life people have been impressing upon me the importance of self-esteem, or at least awareness of self-image. These ideas as atomic concepts haven’t been useful to me, so I’ve made an effort of dissolving these concepts, and then disregarding notions of self. That’s not to say I disregard myself, but that the best way for me to achieve non-attachment is to realize self-identity isn’t a fundamental part of my mind. Lots of people who know me would probably call me out as someone who doesn’t seem all that self-possessed, and so while I may aspire to ego death as much as anyone, I’m probably not great at achieving it.

My typical response is much like the following: while self-identity isn’t a fundamental part of my mind, it’s still a part of my mind. The only reason I’d dissolve the self is for my own purposes, or some purpose I see as greater than myself, and not subject to the mere whims of others. So, with regards to what ontology it can be said “Evan”-as-subject really, truly, objectively exists in, while I do care in the abstract, I don’t care enough in the context of any particular conversation to qualify it to others, who aren’t entitled to an opinion on the nature of my own consciousness.

That all stated, it’d seem mutually beneficial to myself and others for others to have a better model of me, such that our interactions go more smoothly. So, this post will be the first in a series called “Explaining Evan”, in which I attempt to explain, from my own perspective, how and why I am the way I am, and why I behave in the manners I do.