Public Disclaimer Posts

I know lots of people who tend to write blog posts around common topics, themes, or subjects, in a serial or sequential manner. This, then, is a series of posts constituting general public disclaimers with regards to my public and/or private involvement in a number of activities specified below. This is the sort of behaviour other people find odd. However, this post exists so that all public disclaimers I put on my blog uphold my position across virtually the whole set of probable conversations or contexts in which I’d find myself discussing the relevant subject matter. So, I save time by being able to link them to others instead of having to write up more considerations each time I talk to other people about things I care about. Additionally, I write in a manner consistent with my personal interpretation of legalism. The links below constitute published disclaimers. This blog post will be updated as more posts in the series are published.

Paranoid Bayesian Legalist Disclaimer Regarding All Intents and Purposes of My Speech Acts Critically Targeted At What Others Might Characterize As My Self-Identified “Ingroup”

General Disclaimer Regarding Cryonics as of May 2017

General Disclaimer Regarding Cryonics as of May 2017

Rationalists tend to defend their cryonics memberships on the basis of the case Eliezer Yudkowsky originally made several years ago. I tend to believe rationalists have better estimates and evidence for those original claims than is popularly thought. However, those original estimates ultimately and explicitly depended on the solvency of cryonics organizations within the broader societal framework they were in (in practice, just in the United States). In the last several years, though, there’s been such a state of disorganization in the transhumanism and cryonics communities that I’m not able to determine what is quality information, and some of the information which isn’t out of the question includes claims that cryonics organizations like CI and Alcor Life Extension can’t be relied upon. So, I’m skeptical of, and conservative about committing resources to, community projects for cryonics that aren’t first committed to addressing such allegations, figuring out the truth, and finding a solution which satisfies everyone before moving forward.

As far as I can tell, this really hasn’t been addressed in the cryonics and transhumanism communities. I’ve seen some rationalists acknowledge this and even cancel their active cryonics memberships because of this information. Most rationalists I know haven’t updated on this information to the point of cancelling their cryonics subscriptions. However, it seems the rationality community is the only one tolerant enough of criticizing high-status ingroup establishments that people in it feel comfortable bringing this up in the first place. Generally, something like cryonics demands at least the level of transparency and accountability the effective altruism and rationality communities demand of their own flagship organizations, and this isn’t the case for the global community of cryonics subscribers.

So, I’m generally more in favour of, and willing to commit resources to, anti-ageing and longevity projects not dependent upon cryonics.

Paranoid Bayesian Legalist Disclaimer Regarding All Intents and Purposes of My Speech Acts Critically Targeted At What Others Might Characterize As My Self-Identified “Ingroup”

Summary: I, to the best of my own knowledge, fundamentally inoculate myself against hearsay for all the intents and purposes of any speech acts I (am to) make which are traceable back to my civilian identity, such that no one may uphold the indirect consequences of such speech acts as evidence that those acts were in violation of the law, for blog posts written after the publication of this blog post, as they relate to subject matter and content relevant to what people who think of themselves as my “ingroup” call our “ingroup”.
 
I’m going to be writing up some thoughts on epistemology and community norms as they relate to effective altruism and rationality, which may possibly, though by no means necessarily, be critical of the practices of most if not virtually literally all individuals involved in them, and the same for all adjacent communities, and for persons who in practice associate with persons self-identified with all of the above communities. They, and if you’re still reading this post, let’s be honest, probably you, are the sort of person who in their walks of life more or less signals valuing honesty as epistemic integrity and humility, all other considerations being equal.
 
[tl;dr: necessary paranoid Bayesian legalist meta-disclaimer, qualifying as ontologically fundamental to any worldview I’d be forced to defend in a court of law as the true beliefs behind my actual behaviour for any given time occurring after the writing of this post. Feel free to skip.
 
Regardless of however consistently honest you yourself/yourselves are as (an) individual(s) in upholding your values with integrity and fidelity, I think we can all agree by all legitimate lights that I am, and will probably continue to be, the sort of person whom virtually everyone will respect to the point they wouldn’t condone the violation of my civil rights at the hands of the state, as per the letter and spirit of the law of the jurisdiction in which I am currently or will be residing, which for the foreseeable future will for all intents only include nation-states commonly referred to, within their own ultimate sphere(s) of influence, as “the free world”.
 
The above paragraph constitutes a disclaimer which, for virtually all intents and purposes I know of, will allow me to hold liable in court those persons who violate my rights on the grounds of retaliation for me saying something which could, in a court of law, feasibly and plausibly be upheld as having a non-zero probability of causing a non-zero amount of offence to the client of the defense or the plaintiff in question. If indeed this blog post would or could be admitted as evidence in justification of either the defense of my own person or my initiation of a case in a court of law on the basis of my public statements, let the record show I am here and now publicly committing to making only those statements which are intended to improve the community, and not to cause real harm to any persons.]
 
If you didn’t read the “paranoid Bayesian legalist meta-disclaimer”, what I was going for was that, given we’re part of the sort of community which not only values honesty but also mutual improvement through mutual constructive criticism, the only way I can be maximally honest while being maximally constructive in my criticism of the ingroup is to write in a manner in which I’m most comfortable. This includes how I think and speak to the hordes of ingrates[1] in any of the Facebook pages I administrate; how I talk out loud given implicit assumptions of being maximally “off the record”; and how I think inside my own mind. This is in a colloquial manner that I expect in reality will cause a non-zero amount of offence to the sensibilities of people I know. Given we live in a world in which, on the best days of the worst-off people, they can expect to have their lives ruined more than they ever thought possible, and in this topsy-turvy world there aren’t any surefire guarantees any of us won’t one day be in that situation, with all the political correctness we have in the world these days on top of that, who knows what anyone might say which could ultimately be traced back to them and construed as illegitimate or illegal speech undoubtedly intended to directly incite hatred and violence. Given the unpredictability of what sorts of speech (acts) will or won’t hold up in a court of law if construed as such for the indefinite future, I’ve seen the need to inoculate myself against allegations that any and all retaliations against my person, up to and possibly though not necessarily including anything which could be construed as a violation of my negative human rights, are, were, will be, or will have been justified on the grounds I merely hurt another person’s feelings.
 
[1] If you’re still reading this, you’re probably included in (one of) the group(s) of people I just referred to as part of my “hordes”. If you read the rest of my blog post, you’ll discover why I’m comfortable being the sort of person comfortable with referring to you as part of my hordes, despite all the objections I expect you yourself could plausibly and credibly generate.

How to Get My Attention: Go From Interesting Ideas to Project Proposals

I guess I’m a creative guy, and I appreciate that people come to me with their novel theories about how effective altruism really functions, or maybe ought to function in the future. However, because so many people come to me, there are more ideas I could pursue than I have time for. So, ultimately, it’s only worth my time if people are so confident in these ideas that they’re willing to pursue the projects and make something concrete out of their impetus to change society themselves. If they’re so compelled, and what they believe is true, then I’ve got to find evidence of why it’s important enough that I should get excited and pursue it too. That’s how I think. Whenever a question is posed, or a problem exposed, in effective altruism, my ultimate task is to determine whether what you’re talking about matters more than anything else. Because that’s already what’s at stake for so many things people are already doing in the effective altruism community.

Comedians Are Also Responsible for the Infotainment Crisis

If people like John Oliver and Jon Stewart were going to not only benefit from the low quality of cable news but lean into it by making satirical news programs based on some amount of authentic research, they could at least have been up front about it. If they’re more like real news than what we call the news, they’re basically news programs too. But because of how cable ratings work, the corporate conglomerates which own everything from informative programs to pure entertainment don’t track any nuance about why people are watching programs. They’re only tracking how many people are watching.
 
So, to compete with one another, the quality of comedy shows has become more like that of news programs, and the quality of news programs has become more like that of comedy/variety/talk shows. By not acknowledging that, for business-related reasons, their shows are doing real if not legitimate journalism, these comedians as writers and producers exempt themselves from being held to the standards the public holds other types of news media to. However, if we for reasons of prestige don’t acknowledge the obvious reality, that these shows function as a source of news for millions of people, then that leaves the quality news programs in a sector where they’re not just competing against low-quality news, but against all manner of broadcast video entertainment as well. When all shows which talk about current events are optimized for how enjoyable rather than how informative they are, the shows which were trying to be truly informative lose their competitive edge. They become neglected and irrelevant.
 
The Venn diagram of “information” and “entertainment” for non-fictional video media is now a circle. There is only fake news, i.e., the normal media, pretending to be real news, and real news, i.e., facts, pretending to be fake news, i.e., presented in a manner optimized for entertainment instead of quality. What’s more, because there is no standard of credibility anymore, the glow of respectability from teams like those of John Oliver and Jon Stewart, which have actually on occasion done excellent coverage of current events, extends to other celebrities. Any famous person is now as entitled to an opinion on politics or culture as any other. Credentials don’t include a history of experience, association with any particular type of institution, or educational background. Whose opinions qualify as news media worth sharing, not unlike an editorial, is entirely based on how popular a person is on a given day. And that’s it.
 
Now you’ve got a half-dozen shows all doing the same things, but they just consolidate liberal biases in the eyes of millions of people who’d be better off in a world where investigative journalism like 60 Minutes, tailored to match the tastes of young people, existed. All these other shows suck way more than what Jon Stewart was doing. Trevor Noah is not nearly as good a host as Jon Stewart on any dimension I or others seem to care about.
 
This all coalesces in how Hollywood celebrities as a cabal functioned, more than at any time in recent memory, as a propaganda machine for a political candidate not themselves from the arts establishment: Hillary Clinton in 2016. There are other major factors which play into the ugly, amorphous blob that just is all infotainment and celebrity culture, like social media. But I’ve read articles about those. I’ve not seen anyone acknowledge the unique role comedians like Jon Stewart played in shifting the political climate and the nature of public discourse in the contemporary Anglosphere. There’s one goal Jon Stewart kept in mind, however noble his other goals, that he didn’t disclose. Because he’s so shaped and influenced our expectations of what news ought to be like, he should acknowledge what corporate media ownership forces him to do. Ultimately, more than to either inform or entertain, The Daily Show and all shows on TV like it have the goal of earning money for their producers and advertisers. After all, it’s show business.

No Positive Reason to Suspect OpenPhil of Wrongdoing

I’m not too concerned with OpenPhil’s grant to OpenAI. I don’t expect the potential conflicts of interest amount to anything. I think there’s a perverse outcome where everyone is self-conscious in this regard except OpenPhil/GiveWell themselves. Like, had they not disclosed the potential conflicts of interest, nobody would’ve noticed. Professional relationships between two different teams in effective altruism, where there are close mutual friends or spouses working at either organization, have existed in EA for years. It happens in every community in every culture on Earth. It’s not always nepotism.
 
Humans are going to be human. If we want to act surprised that people will be drawn to, and form close bonds with, people who share their attitudes, interests, and values; whom they spend lots of time with; and whom they’re forced to be exposed to in the course of their daily lives, we’re phoning it in. That’s not to say the questions are worth asking in the first place.
 
What I’m saying is that if it’s something we’re hemming and hawing over, when effective altruism is already a community where I expect its members will ultimately tread lightly and not be outrageously disrespectful, we might as well be candid about our concerns. There’s no point being polite for politeness’ sake alone if what you’re doing is concern-trolling or passive-aggressively expecting some subtext to be noticed.
 
The culture of EA is sociopolitically dominated by secular liberalism. I think there’s this self-consciousness in that mindset where the ideal is for equality of opportunity, which we hope leads to equality of outcome, but often doesn’t. When things don’t line up the way we hoped for despite everyone’s best efforts to set up an (as close as was realistic to) objectively fair system, we’re apprehensive. We fear the problem might be us, and that we’re unable to live up to our ideals.
 
I don’t suspect Holden, or anyone at OpenAI or OpenPhil, is culpable of nepotism, implicit cognitive favouritism, or failing their responsibility to try in earnest to do the most good. I think there’s a lot about this grant OpenPhil isn’t disclosing, and it’s odd. I think perhaps they’ve made poor judgement calls or errors in reasoning we can sensibly disagree over. I think the apparent gaps in what OpenPhil’s doing here with such a grant to OpenAI may be filled in with concerns over misdeeds, but whatever real mistakes there are, they’re of the far simpler sort everyone makes.

Response to Community-Building in Effective Altruism

The way I wrote it above was strongly worded in favour of effective altruists gaining a background. I think immersing oneself in the more substantive, well-laid-out, intentional blog posts is important. My prior comment made it sound like this applies to conversations on Facebook. One problem is that some major historical discussions in effective altruism happen in groups like this. But those aren’t well-tracked, and nobody just copies and collects the hyperlinks for reading at a later date.

So what we’re suggesting is to follow the important Facebook posts all the time, from the right people and the right groups, to learn about positions which become tacit common knowledge as time goes on. It’s not just that we’re saying “pay more attention on Facebook”; it’s like sorting through a puzzle to figure out what sources of information are considered acceptable or not. Effective altruism is an intricate network, and the fact that some people have formed personal relationships over years of social context, making the network more intimate rather than outgoing or attractive to newcomers, can make entering some sort of “in-crowd” in effective altruism intimidating.

This is a problem Brian Tomasik has talked about in the past in “Why Make Conversations Public”. I think long-time community members have an institutional or community privilege in the historical advantage of our experience in the community. We’re taking for granted that everyone ought to know what we think are the best ideas now. If I think about this a bit, though, I can empathize with those who find this attitude somewhat arrogant. These implicit expectations altogether can be intimidating, and can make the effective altruism community unwelcoming to those trying to gain social traction in it. Like, moral excitement is touted and courted as a motivation for doing the most good, but people who get excited by EA and try to enter get shut down.

I think this is a problem that exists online, and if one can join a strong in-person community, people form bonds which make them more welcoming to newcomers. While this solves the problem of joining the community for some, it can create a problem for others. Places like the San Francisco Bay Area or Metro London are expensive to live in, and the difficulty of moving to these places isn’t publicly acknowledged even if it’s empathized with. I don’t know what percentage of effective altruists feel this way, so I don’t know the true scope of this issue, but I’ve been hearing anecdotes for years of a gap which generates a dissuasive sentiment. I know correcting these sorts of problems is hit-and-miss for the rationality community, but they have a record of trying to debug them, with mixed success. I guess finding some best practices, and accelerating the rate at which bugs in community expansion get fixed with community cohesion intact, is what Raymond Arnold is doing with his Project Hufflepuff.

I think if long-time members of the community like myself and others are going to gripe about people not getting up to speed fast enough, or not closing all their procedural knowledge gaps fast enough, we have a responsibility to also make the inroads to the community more welcoming. This is the sort of thing my friends in the closely knit Seattle Rationality/Effective Altruism community have been thinking about lately.

I think people from some smaller geographic communities can feel more resentful, but those aren’t feelings they’d defend. Really, the most damaging part isn’t so much a brain drain as it is that community leaders form connections with organizations in the major hubs (e.g., Oxford, SF, Boston?), and this leaves a leadership vacuum.

Cultivating a culture of welcomingness and finding ways to socially and culturally invest in local communities all over the place are hard problems to solve. I think a start, though, would be for the EA Handbook to be updated and spread around or promoted at the level ‘Doing Good Better’ gets promoted at, and for there to also be a community organizer handbook written in chapters, with tips from various local organizers around the world, as opposed to something centrally written by a single organization like LEAN or CEA. I may pursue online coordination on this sort of project with Project Hufflepuff, the Accelerator Project, Leverage Research, CFAR, CEA, LEAN, Sentience Politics/EAF, or other groups.

The Ontology of Dank EA Memes

References to phenomena in Dank EA Memes are related to Dank EA Memes. As a significant forum for discourse in effective altruism, meta-level references to dank EA memes and events in the group itself ontologically share a direct relation to effective altruism. According to Yudmowski’s Law of Dankfinite Recursion, memes retain their relevance to effective altruism within three degrees of an object-level EA topic. Therefore, this meme is only one to two degrees removed. Therefore, it’s allowed.

A meme referencing this post would be two to three degrees removed, except this post is now a phenomenon within DEAM itself, and in being referenced, without the hypothetical meta-level reference being self-referential, it would now qualify as an object-level EA phenomenon.
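
For the pedants: here is a minimal, tongue-in-cheek sketch of the degrees-of-removal check the Law implies. Everything in it, the reference graph, the function name, and the sentinel topic string, is hypothetical, invented purely for illustration.

```python
# A tongue-in-cheek sketch of the "Law of Dankfinite Recursion": a meme
# stays admissible if it sits within three degrees of an object-level EA
# topic. All names and the graph below are hypothetical illustrations.
from collections import deque

OBJECT_LEVEL = "object-level EA topic"

def degrees_removed(meme, references):
    """Breadth-first search for the shortest chain of references from
    `meme` down to an object-level EA topic (degree 0)."""
    queue = deque([(meme, 0)])
    seen = {meme}
    while queue:
        node, depth = queue.popleft()
        if node == OBJECT_LEVEL:
            return depth
        for target in references.get(node, []):
            if target not in seen:
                seen.add(target)
                queue.append((target, depth + 1))
    return float("inf")  # no chain back to EA at all: not dank, just noise

# Hypothetical reference graph for this very post:
references = {
    "meme about this post": ["this post"],
    "this post": ["Dank EA Memes"],
    "Dank EA Memes": [OBJECT_LEVEL],
}

assert degrees_removed("meme about this post", references) <= 3  # allowed
```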

Metacontrarianism in 2017

[epistemic status: flying by the seat of my pants. Just wingin’ it.]

We’ve passed through another cycle of rationalist dialectics. Bashing the centre, what we call “liberals”, which also includes some people from the progressive centre-left and conservative centre-right, has been in vogue. The early signs of it started with Brexit. That’s when it was truly contrarian, and the last time it was original.
 
After Trump’s election, a lot of people jumped on the bandwagon. That’s when it was metacontrarian. You were neither a Clinton, Trump, nor Sanders supporter; you didn’t register a prediction before the election and you followed along with all the polls, but right after, you mentioned all the misgivings you’d had about everything the whole time. That’s jumping on a bandwagon if I’ve ever seen it. Well, we all did that at least a little bit. If you’ve been criticizing centrists for their naivete since the election, consider: if it took you until the election to notice it, you were naive too. I know I was. C’mon, it’s okay to admit you were signaling if we all did it. I won’t tell Robin Hanson if you won’t.
 
Anyway, bashing centrists is now standard again. It’s filtered through Dank EA Memes now. It’s like, “haha, liberals want us to still take them seriously when they’re so out of touch!” is a meme. And it’s been a meme for months. It’s a tired meme. I’m pretty sure teenagers who don’t know what the Washington Post or the New York Times are are now mocking WaPo or NYT. Even calling out Vox for all its libsplaining is really old hat.
 
So if you want to look cool and edgy in Q2 of 2017 by having a hedgehog political prophecy before anyone else, we need to figure out the next metacontrarian position.
 
The next metacontrarian position isn’t about politics. It’s about journalism and the media. The standard position is hating the media, which is also the correct position. The contrarian position is being someone who feels like they need to get decent news somewhere, because they’re worried for their well-being under the Trump presidency and need to know what might change, and NYT or WaPo are the places to get it. I heard somewhere that paid subscriptions to those news publications and the Boston Globe have been way up since Trump’s election. For the record, I take no position on the legitimacy of NYT or the Boston Globe as sources of good journalism. WaPo threw Ed Snowden under the bus last year, so to Hell with them.
 
Anyway, the metacontrarian position on journalism in 2017 is: 

*drum roll*

journalism just is factionalized and partisan politics! Journalism is just about tribal fighting, and the Fourth Estate no longer even exists…? Yeah, basically some position not lamenting the downfall of journalism, but some long-winded theory about how journalism fell decades ago and you’re only noticing it now because it was you, the news-reading public, who drove journalism downhill with your consumerist hunger for infotainment! Yeah, that sounds right. If you write something that reads like a LessWrong post, but can be reduced to “political theatre is written like a literal soap opera”, you’re pretty metacontrarian.

Now we just need some edgy up-and-coming blogger to write it up. No, it won’t be me. Even I don’t care that much about coming up with wacky inside views nobody else would see coming. At least not for politics. I mean, people should be putting that effort into finding Cause X. Come to think of it, we don’t actually use metacontrarianism to search for Cause X. We should try that. Like, just take two causes, and rationalize some wacky hybrid out of thin air, like the two causes were two chunks of Play-Doh. The results would be the Cronenbergs of cause prioritization.

Anyway, everyone on Rationalist Tumblr should write their crackpot theory about why journalism became so awful, and then we can adopt them to be cool and edgy again, fusing bullshit with other bullshit to make some extra-deluxe, platinum-coated bullshit that still doesn’t match the map to the territory. One of you might also turn out to be right, and if you are, you can be the next Scott Adams, i.e., “Dilbert guy who called Trump a ‘wizard’ on his blog”.