Online Discourse Is the Tip of the Iceberg

This is in response to a post in the Effective Altruism Facebook group.

>I’ve noticed a recurring problem with discussions in EA Facebook groups, and really this is a problem with discussions on all Facebook groups focused on discussing and debating serious, smarty pants topics.

>People say really mean things.

>Sometimes it is blunt. Sometimes it is passive-aggressively wrapped up in diplomatic rational speak. Either way, it makes me want to light a house on fire.
>
>Here’s the bottom line. It is impossible to engage in a productive or worthwhile conversation with someone who, subtly or overtly, challenges your intelligence, blanketly dismisses your ideas, or otherwise treats you without respect. Meanness salts the earth of fruitful discourse.

>When people talk down to each other, many people (most people?) disengage. It becomes too stressful. Even the second-hand degradation is uncomfortable to watch, and being directly degraded is unbearable. Inevitably, some people will continue to engage, but it will be the people with the most aggressive, adversarial, and abrasive conversational style. This is what I often see happening in EA discussion groups (and other discussion groups too). The most engaged participants are the ones who are rude and condescending to the people they disagree with.

In-person interaction, and human relationships in general, don’t have all the same features as online discourse. EA is a community that takes discussing and debating serious, smarty-pants topics to the level of doing something about them; for some people it’s something they put their all into, and a way of life. Having been part of the EA community since 2011, knowing hundreds of effective altruists, and hearing stories similar to my own observations, I notice the same thing happens in person. Effective altruists, not only in how they communicate but in their behaviour over extended periods of time, can often be blunt, passive-aggressive, mean, disrespectful and dismissive. Meanness salts the earth of fruitful cooperation.

As effective altruists are drawn to hubs where they live, work and date each other, while already part of the online bubble that is EA, the relationship an individual has with the whole community can distort their life. As online conversations and personal experiences compound, they just become our lives. So effective altruists can end up living lives where they feel the people they’ve joined are all like this.

For years EAs have talked about improving online discourse, but there’s lots that never goes public before the whole movement, even if it somehow makes it onto the internet. And it shouldn’t. I think local EA communities resolve their interpersonal conflicts one way or another, and whatever means local groups can generate to resolve their own problems may be the best we can hope for. Beyond that, the best I can think of is for some effective altruists to start a project suggesting or recommending more compassionate ways of communicating and being around one another. We can’t force that on people. However, I don’t want effective altruists to pretend that poor online discourse in the community is utterly unrelated to what happens in those small local groups, or private mailing lists, or at meetups, or in our daily lives as EA becomes our daily lives.

Ultimately, solving communication or coordination problems isn’t some uniquely intellectual task. It’s a human problem. Nobody has this all figured out. To the extent effective altruists, in person and online, are finding solutions for how to get along better, we’re finding solutions for how people get along better, period. This is something the rationality community has focused on for its own purposes more than the EA community has. It’s not my impression the rationality community is particularly better or worse than anyone else at having its members get along. They’ve certainly found or generated tools for doing so, but it takes lots of concentration and focus to utilize them well. And for the tools they’ve borrowed from, e.g., different therapeutic paradigms, I don’t see the rationality community having a better knack for them than those already working in caring professions.

What I think the rationality community is good at, in helping people get along, is showing each other how to get along with a big group in novel social arrangements. To some extent EA, and lots of other social movements like it, are unprecedented in modern history, and we’re thrust together in ways our intuitions and traditions don’t prepare us for. The rationality community seems focused on drawing on lots of working practices for improved communication and applying them to the unusual situations and lifestyles its members find themselves in, because rationalists try unusual things. Doing the most good is about doing good in unusual ways too, or else it wouldn’t be necessary.

Improving online discourse is hard. Discourse in EA seems good enough, though; for the most part people are sticking around. It’s working decently, but I think EAs need to take stock of the fact that this is a community in which a lot of us will know each other for the rest of our lives. Solving all global problems seems like it may take a while. Solving online discourse problems is a surface layer which will become less important than the way of life and culture EA is becoming. It’s not too late to shift those norms for the better. I think in time how we learn to get along everywhere will dominate considerations of how we learn to get along online.

Again, I want to emphasize this all needs to be acknowledged, but a lot of it is part of the human condition. I think EAs get so excited by the ideas and the projects and the people that we hope we can find a superhuman way of solving interpersonal problems too. EA is a movement which generates families, but it won’t be a movement which can prevent divorces. EA is a movement which gives people families of choice, but it won’t be a movement which can prevent all those families from splitting up like any other family might. All we can do is try.

The Worst Arguments in Effective Altruism

Every once in a while, an aspiring rationalist will shine a light on history and put to bed a general type of argument commonly used on the internet, because such arguments stopped contributing to raising the sanity waterline long ago, if they ever did at all. That aspiring rationalist is usually Scott Alexander.

Today I’m going to try something similar for effective altruism (EA). I’m going to declare some arguments “retired”, in the hope people won’t use them anymore. I don’t see them used as much as I used to, but I want to explain why, in the context of EA, they don’t change anyone’s perceptions, and why using them is a waste of time.

They’re forms of the non-central fallacy, i.e., the worst argument in the world, as put forward by Scott himself. Here is why, in the context of effective altruism, arguing against an idea either because “it’s too sci-fi” or because its original source is “just a blog” is the worst argument in the world.

Continue reading

The State of Reducing Wild Animal Suffering

Note: this post was originally written with those organizations and projects in mind which originate from within the effective altruism (EA) movement. These organizations are those exclusively focused on reducing the suffering of wild animals from naturogenic causes, i.e., sources of suffering originating in nature itself, as opposed to anthropogenic sources, i.e., suffering caused by human action. Multiple people have pointed out to me there are of course many people working on reducing wild animal suffering from all manner of causes. Were this community to expand, what types of wilderness interventions to pursue, from both within and outside the EA movement, would need to be fleshed out.

This is the state of the community as of July 2017.

In the effective animal activism/advocacy (EAA) movement, there have been some organizations born out of effective altruism that didn’t previously exist in the broader animal welfare/rights/liberation movement. There are lots of good projects, but it seems these organizations were uniquely suited to making a real go of reducing wild animal suffering (RWAS) as a cause at a time when it seemed most possible. In the meantime, though, multiple organizations have deprioritized RWAS as changes in EAA have occurred. While there are many great organizations out of EAA which continue to focus on farmed animal welfare, at first glance it seems like virtually none are prioritizing RWAS. This is concerning, so I’m making a summary for everyone of which organizations, as far as I can tell, still do or don’t focus on RWAS.

I’ve talked to Brian Tomasik, and he told me the Foundational Research Institute is no longer doing research on wild animal suffering. I learned in a post introducing Sentience Institute, a research-focused think tank, written by Jacy Reese, that it will be splitting from Sentience Politics, which is prioritizing initiatives in the German-speaking world, and which is in turn splitting from the Effective Altruism Foundation (EAF). Sentience Institute and Sentience Politics will be focusing on farmed animal welfare for the foreseeable future, although focusing on wild animal suffering is still part of their respective missions as a potential future focus. EAF has two researchers focused on RWAS at present. EAF and Raising for Effective Giving continue to recommend charities in any cause that’s evaluated by effective altruists, such as by Animal Charity Evaluators. So, if a charity focusing on RWAS somehow stands out among all the other charities EAs focus on, I’m sure they’ll recommend it too.

Animal Ethics is an organization focusing on RWAS. Sentience Politics also employs two researchers as part of an ongoing RWAS research project. Andres Gomez Emilsson organizes the movement around the Hedonistic Imperative, as laid out by David Pearce, which certainly focuses on RWAS. However, a lot of the groundwork to organize the movement to reduce wild animal suffering; to build its capacity to research, prioritize and evaluate; and to raise awareness and generate resources remains to be done. Small-scale research analysis, which is the greatest extent of real RWAS activity, doesn’t lend itself to galvanizing what seems like more effective altruists and effective animal advocates taking the issue seriously than ever before. I think that’s several hundred people, or maybe even a couple thousand. That’s been enough to launch any other cause in EA to vaunted or central status in the movement, and thus more so in the world around it.

What’s more, there is lots of existing knowledge on RWAS that, while in need of being compiled into a neat reading list, can be acted upon. While there is a need to fund knowledge production on RWAS, taking this cause beyond theory is something that can be done, but nobody has a roadmap for it. The current landscape of RWAS is very small, and too small to grow on its own. There are a few Facebook groups focused on it (feel free to link them in the comments). If there is any organization or project focusing on reducing wild animal suffering I missed, please let me know.

Otherwise, as far as I know, that’s it. There is no other coordination or organization of efforts to reduce wild animal suffering. I guess this post is a warning signal that if there isn’t some community investment in the cause now, it will never happen. If you’re still reading this, there are so few of us left that input from anyone is valuable and necessary. Feel free to share and use this post as a tentpole for figuring out where we can go from here.

Operationalize the Harm

[content warning: discussion of paedophilia, homophobia]
I’ve been trying to expand my filter bubble these last few months. I’ve been watching some right-wing YouTubers, and while some of them have alright commentary, a lot of it seems, at best, not backed up by numbers. For example, I was watching a video about how leftist politics is incubating some growing movement of activists trying to get people to accept non-offending paedophiles coming out and seeking help in public, and praising them as virtuous for doing so. Examples of these things happening are given to prove they technically exist, but these things blow up in the news and get blown out of proportion. There are all manner of reasons one might oppose a public policy of paedophiles seeking treatment in this way, and it’s easy to see how a virtual mass hysteria can be generated, and stereotypes perpetuated.
That there are a non-zero number of politically left-wing activists who are also part of this paedophilia-acceptance(?) movement isn’t useful data about how big or imminent a threat it is, or what kind of threat it is, so as to effectively address it. These moral panics seem pretty standard in the news cycle these days. It’s hard to tell which ones are actively the worst. The same sort of thing happens on the left. Some people on the left have been using the actions of the Westboro Baptist Church to stereotype all manner of American Christians for years. That’s despite the fact the Westboro Baptist Church is a small but loud organization which does all sorts of things all manner of Christians would reject anyway. I guess you could say, technically, this means there are a non-zero number of right-wing activists who are dangerously homophobic, but that hardly means there’s a threat of the tide turning against progressive popular opinions overnight. The same goes for these paedophilia activists.

We need to substantiate the claims of alleged threat with evidence, because if we end up very wrong, it’s just a waste of resources on top of everything else.

My Opinion on The Gender Wage Gap

A friend asked me my opinion on the gender wage gap. Below is my response.

Literally everybody on all sides who cites statistics not from social scientists tends to use crap numbers that miss the bigger picture. Lots of social scientists from all kinds of fields also have crappy numbers that mischaracterize reality. Bundling all types of jobs together, when different types of careers and occupations develop on the basis of all kinds of different local or workplace cultures, is nonsense. Basically, obviously there will be industries where, controlling for everything else, some systemic type of sexism is the remaining factor explaining wage gaps or comparable metrics of equity like authority in the workplace. Unfortunately, this level of nuance is lost in the broader culture wars by virtually everyone who isn’t discussing all the above content in the context of research.

What anyone ought to do in practice, on a piecemeal basis, is identify those industries wherein sexism is legitimately having the worst consequences, where nobody has tried anything, for which it actually matters[1], and where something can be done about it without systemically changing the values of everyone in society, and then do it. This starts with reaching out to the people in and around those industries most likely in practice to be willing and able to do something about it. Lots of people these days say action isn’t worth it if it doesn’t result in systemic change. I’ve talked to all manner of people of every perspective on this issue, and from no angle is that position seen as sound by anyone else, so I reject it outright.

There are people who do urge for systemic change, and put their money where their mouth is. This is a difference between liberal, Marxian and radical variants of feminism that I don’t know enough about to follow the state of discourse on at this time, but I encourage its continuation. Of course, by this point I’m entailing commentary on the state of organization and discourse on the political left in general, which is a separate topic.

[1] I am not going to become upset about the inequality between men and women in the field of midwifery, and I’m not going to become upset that more men than women are firefighters when any firefighter, regardless of self-identified sex or gender, must constantly meet some minimum threshold of fitness to be able to reliably and sufficiently perform their job in emergency situations, and it just so happens men are more likely to be able to run while lifting 300-lb. people on their shoulders than women.

Why I Mix Humour With Seriousness

A friend recently made the following observation about me.

>I think you pretend you’re joking a lot more often than you actually are. Most people use humor this way sometimes, but I think you’re doing it as a primary strategy.

Since they were the first person who cared enough to point this out, I not only confirmed my friend’s suspicion, but explained why I behave in this manner. It goes as follows.

It’s true that I pass myself off as joking more often than I actually am. What’s going on is that on the object level of a certain discussion or framework, I’m being serious. However, on the meta level, I don’t necessarily believe these things. That is, I take ideas seriously without believing them myself. It’s my experience that virtually no amount of me saying I don’t sincerely believe something, or saying I sincerely disbelieve something, actually causes everyone to react in a manner consistent with them believing me to be an honest person.
The perception of threatening or harmful lying causes one to reliably lose status with everyone. Harmless or non-threatening lying is usually neutral regarding one’s status with most others, and may even increase status. Being upfront and explicit about this minimizes the probability of losing status. At least, that’s my theory. Anyway, joking is the more reliable way to explicitly signal the lie one is perpetrating is non-threatening or harmless. I also have a comparative advantage in it.
People tend not to trust people who are never sincere. If I’m always joking, people will think of me as never being sincere, which would be bad for me. So, the only way to plausibly maintain I’m joking about things at the rate I do, without people no longer wanting to interact with me when they want a sincere interlocutor, is to not be sure if I myself am joking or not. It’s my impression people will end up finding out whether one is lying or not. So, to be able to credibly claim I’m not sure if I’m being serious or not, I must actually not be able to tell if I’m being serious or not. So, I try not to take myself too seriously.
Signaling that I don’t strongly hold beliefs that would pose a threat to any of you, other things being equal, minimizes the probability any of you would harass, offend, stigmatize, dislike, or hate me, etc.
This is a habit I unthinkingly maintain across the whole internet, although if you’d prefer, I’ll make an effort to keep my guard down here, and not pretend to be joking at all when I myself am completely serious.
Another thing is I tend to mix humour and seriousness in much of what I write. I mean, sometimes I’m trying to be intentionally funny, and sometimes I find the most natural way to write on a subject in my own voice is one which, while making serious points, also comes across as playful and/or funny to myself or others. I work on multiple semantic layers. Like, the deeper structure of my opinion is that I obviously still think, all things considered, civnat is ambijectively superior to ethnat, or at least the arguments for ethnat I’ve seen so far aren’t sufficient to change my mind. The humour in my writing exists for social purposes, like indicating that just because I have some non-trivial (even if practically meaningless) disagreements with friends over politics doesn’t mean I think any less of them as people, or that I otherwise intend to treat them with less friendliness in future interactions.

On Modeling Myself

I tend to confuse people, or at least tend to give people one impression when what they think I’m doing, or what I’m actually doing, is something else. Lots of people in the rationality community try modeling each other, and themselves, and supposedly this helps us interact with one another. I have implicit models of people I work off of, which will probably form regardless of whether I want them to or not. I used to try acknowledging or noticing my models of other people to make them explicit, but I found this wasn’t useful. I’ve checked with others, and the perceptions or models others have of me are about what I expected. I think I could model myself as accurately as anyone else could model me, or as accurately as anyone else could model themselves, but I don’t know what I’d use such a model for, so I don’t make much of an effort to maintain one.

My whole life people have been impressing upon me the importance of self-esteem, or at least awareness of self-image. These ideas as atomic concepts haven’t been useful to me, so I’ve made an effort to dissolve these concepts, and then disregard notions of self. That’s not to say I disregard myself, but that the best way for me to achieve non-attachment is to realize self-identity isn’t a fundamental part of my mind. Lots of people who know me would probably call me out as someone who doesn’t seem all that self-possessed, and so while I may aspire to ego death as much as anyone, I’m probably not great at achieving it.

My typical response is much like the following: while self-identity isn’t a fundamental part of my mind, it’s still a part of my mind. The only reason I’d dissolve the self is for my own purposes, or some purpose I see as greater than myself, and not subject to the mere whims of others. So, with regards to what ontology it can be said “Evan”-as-subject really, truly, objectively exists in, while I do care in the abstract, I don’t care enough in the context of any particular conversation to qualify it to others, who aren’t entitled to an opinion on the nature of my own consciousness.

That all stated, it’d seem mutually beneficial to myself and others for others to have a better model of me such that our interactions go more smoothly. So, this post will be the first in a series called “Explaining Evan”, in which I attempt to explain, from my own perspective, how and why I am the way I am, and why I behave in the manners I do.


Public Disclaimers Posts

I know lots of people who tend to write blog posts around common topics, themes, or subjects, in a serial or sequential manner. This, then, is a series of posts constituting general public disclaimers with regards to my public and/or private involvement in a number of activities specified below. This is the sort of behaviour other people find odd. However, this post exists so that all public disclaimers I put on my blog are written to uphold my opinion across virtually the whole set of probable conversations or contexts I’d find myself in when discussing the relevant subject matter. So, I save time by being able to link them to others instead of having to write up more considerations each time I talk to other people about things I care about. Additionally, I write such that it’s consistent with my personal interpretation of legalism. The links below constitute published disclaimers. This blog post will be updated as more posts in the series are published.

Paranoid Bayesian Legalist Disclaimer Regarding All Intents and Purposes of My Speech Acts Critically Targeted At What Others Might Characterize As My Self-Identified “Ingroup”

General Disclaimer Regarding Cryonics As Of May 2017

General Disclaimer Regarding Cryonics as of May 2017

Rationalists tend to defend their cryonics memberships based on the case Eliezer Yudkowsky originally made several years ago. I tend to believe rationalists have better estimates and evidence for those original claims than is popularly thought. However, those original estimates ultimately depended explicitly on the solvency of cryonics organizations within the broader societal framework they operate in (in practice, just the United States). In the last several years, though, there’s been a state of disorganization in the transhumanism and cryonics communities such that I’m not able to determine what is quality information, and some information which isn’t out of the question is data claiming cryonics organizations like CI and Alcor Life Extension can’t be relied upon. So, I’m skeptical of, and conservative about committing resources to, community projects for cryonics that aren’t first committed to addressing such allegations, figuring out the truth, and finding a solution which satisfies everyone before moving forward.

As far as I can tell, this really hasn’t been addressed in the cryonics and transhumanism communities. I’ve seen some rationalists acknowledge this and even cancel their active cryonics memberships because of this information. Most rationalists I know haven’t updated on this information to the point of canceling their cryonics subscriptions. However, it seems the rationality community is the only one tolerant enough of criticizing high-status ingroup establishments that people in said community feel comfortable bringing it up in the first place. Generally, something like cryonics demands at least the level of transparency/accountability the effective altruism and rationality communities demand of their own flagship organizations, and this isn’t the case for the global community of cryonics subscribers.

So, I’m generally more in favour of, and willing to commit resources to, anti-ageing and longevity projects not dependent upon cryonics.