I know lots of people who tend to write blog posts around common topics, themes, or subjects in a serial or sequential manner. This, then, is a series of posts constituting general public disclaimers with regard to my public and/or private involvement in a number of activities specified below. This is the sort of behaviour other people find odd. However, this post exists so that every public disclaimer I put on my blog is written to uphold my opinion across virtually the whole set of probable conversations or contexts in which I’d find myself discussing the relevant subject matter. That way, I save time by being able to link them to others instead of having to write up more considerations each time I talk to other people about things I care about. Additionally, I write them so they’re consistent with my personal interpretation of legalism. The links below constitute the published disclaimers. This blog post will be updated as more posts in the series are published.
Rationalists tend to defend their cryonics memberships on the case Eliezer Yudkowsky originally made several years ago. I tend to believe rationalists have better estimates and evidence for those original claims than is popularly thought. However, those original estimates ultimately depended explicitly on the solvency of cryonics organizations within the broader societal framework they operate in (in practice, just the United States). In the last several years, though, there’s been such a state of disorganization in the transhumanism and cryonics communities that I’m not able to determine what is quality information, and some of the information I can’t rule out claims that cryonics organizations like CI and Alcor Life Extension can’t be relied upon. So, I’m skeptical of, and conservative about committing resources to, community projects for cryonics that aren’t first committed to addressing such allegations, figuring out the truth, and finding a solution which satisfies everyone before moving forward.
As far as I can tell, this really hasn’t been addressed in the cryonics and transhumanism communities. I’ve seen some rationalists acknowledge this and even cancel their active cryonics memberships because of this information. Most rationalists I know haven’t updated on it to the point of cancelling their cryonics subscriptions, though. However, it seems the rationality community is the only one tolerant enough of criticizing high-status ingroup establishments that people in it feel comfortable bringing the issue up in the first place. Generally, something like cryonics demands at least the level of transparency and accountability the effective altruism and rationality communities demand of their own flagship organizations, and this isn’t the case for the global community of cryonics subscribers.
So, I’m generally more in favour of, and willing to commit resources to, anti-ageing and longevity projects not dependent upon cryonics.
I guess I’m a creative guy, and I appreciate that people come to me with their novel theories about how effective altruism really functions, or how it ought to function in the future. However, so many people come to me with ideas that I don’t have time to pursue them all. So, ultimately, it’s only worth my time if people are confident enough in these ideas that they’re willing to pursue the projects themselves and make something concrete out of their impetus to change society. If they’re that compelled, and what they believe is true, then I’ve got to find evidence of why it’s important enough that I should get excited and pursue it too. That’s how I think. Whenever a question is posed, or a problem exposed, in effective altruism, my ultimate question is whether what you’re talking about matters more than anything else. Because that’s already what’s at stake for so many things people are already doing in the effective altruism community.
The way I wrote it above was strongly worded advice for effective altruists to gain a background. I think immersing oneself in the more substantive, well-laid-out, intentional blog posts is important, but my prior comment made it sound like this applies to conversations on Facebook. One problem is that some major historical discussions in effective altruism happen in groups like this, but those aren’t well tracked, and nobody copies and collects the hyperlinks for reading at a later date.
So what we’re asking is for people to follow the important Facebook posts all the time, from the right people and the right groups, to learn about positions which become tacit common knowledge as time goes on. It’s not just that we’re saying “pay more attention on Facebook”; it’s like sorting through a puzzle to figure out which sources of information are considered acceptable and which aren’t. Effective altruism is an intricate network, and the fact that some people have formed personal relationships over years of shared social context makes the network more intimate, but less outgoing or attractive to newcomers, which can make entering some sort of “in-crowd” in effective altruism intimidating.
This is a problem Brian Tomasik has talked about in the past in “Why Make Conversations Public”. I think long-time community members have a kind of institutional or community privilege: the historical advantage of our experience in the community. We take for granted that everyone ought to already know what we think are the best ideas. If I think about this a bit, though, I can empathize with those who find this attitude somewhat arrogant. These implicit expectations can be intimidating, and can make gaining social traction in the effective altruism community feel unwelcoming. Like, moral excitement is touted and courted as a motivation for doing the most good, but people who get excited by EA and try to enter get shut down.
I think this is a problem that exists mostly online; if one can join a strong in-person community, people form bonds which make them more welcoming to newcomers. While this solves the problem of joining the community for some, it creates a problem for others. Places like the San Francisco Bay Area or Metro London are expensive to live in, and the difficulty of moving to these places, financial and otherwise, isn’t publicly acknowledged even if it’s empathized with. I don’t know what percentage of effective altruists feel this way, so I don’t know the true scope of the issue, but I’ve been hearing anecdotes about this gap, and the dissuasive sentiment it generates, for years. I know correcting these sorts of problems has been hit and miss for the rationality community, but they have a record of trying to debug them, with mixed success. I guess finding some best practices and accelerating the rate at which bugs in community expansion are fixed, while keeping community cohesion intact, is what Raymond Arnold is doing with his Project Hufflepuff.
I think if long-time members of the community like myself and others are going to gripe about people not getting up to speed fast enough, or not closing all their procedural knowledge gaps fast enough, we have a responsibility to also make the inroads to the community more welcoming. This is the sort of thing my friends in the close-knit Seattle Rationality/Effective Altruism community have been thinking about lately.
I think people from some smaller geographic communities can feel more resentful, though those aren’t feelings they’d defend. Really, the most damaging part isn’t so much a brain drain as it is that community leaders form connections with organizations in the major hubs (e.g., Oxford, SF, Boston), which leaves a leadership vacuum behind.
Cultivating a culture of welcomingness and finding ways to socially and culturally invest in local communities all over the place are hard problems to solve. I think a start, though, would be for the EA Handbook to be updated and spread around or promoted at the level ‘Doing Good Better’ gets promoted at, and for there also to be a community-organizer handbook written in chapters, with tips from various local organizers around the world, as opposed to something centrally written by a single organization like LEAN or CEA. I may pursue online coordination on this sort of project with Project Hufflepuff, the Accelerator Project, Leverage Research, CFAR, CEA, LEAN, Sentience Politics/EAF, or other groups.
References to phenomena in Dank EA Memes are related to Dank EA Memes. Since the group is a significant forum for discourse in effective altruism, meta-level references to dank EA memes and to events in the group itself ontologically share a direct relation to effective altruism. According to Yudmowski’s Law of Dankfinite Recursion, memes retain their relevance to effective altruism within three degrees of an object-level EA topic. This meme is only one to two degrees removed; therefore, it’s allowed.
A meme referencing this post would be two to three degrees removed, except this post is now a phenomenon within DEAM itself, and in being referenced (without the hypothetical meta-level reference being self-referential) it would now qualify as an object-level EA phenomenon.
[epistemic status: flying by the seat of my pants. Just wingin’ it.]
Journalism just is factionalized and partisan politics! Journalism is just about tribal fighting, and the Fourth Estate no longer even exists…? Yeah, basically some position not lamenting the downfall of journalism, but some long-winded theory about how journalism fell decades ago and you’re only noticing it now because it was you, the news-reading public, who drove journalism downhill with your consumerist hunger for infotainment! Yeah, that sounds right. If you write something that reads like a LessWrong post but can be reduced to “political theatre is written like a literal soap opera”, you’re pretty metacontrarian.
Now we just need some edgy up-and-coming blogger to write it up. No, it won’t be me. Even I don’t care that much about coming up with wacky inside views nobody else would see coming, at least for politics. I mean, people should be putting that effort into finding Cause X. Come to think of it, we don’t actually use metacontrarianism to search for Cause X. We should try that. Like, just take two causes and rationalize some wacky hybrid out of thin air, like the two causes were two chunks of Play-Doh. The results would be the Cronenbergs of cause prioritization.
Anyway, everyone on Rationalist Tumblr should write their crackpot theory about why journalism became so awful, and then we can adapt them to be cool and edgy again by fusing bullshit with other bullshit to make some extra-deluxe platinum-coated bullshit that still doesn’t match the map to the territory. One of you might also turn out to be right, and if you are, you can be the next Scott Adams, i.e., “Dilbert guy who called Trump a ‘wizard’ on his blog”.