Why I Mix Humour With Seriousness

A friend recently made the following observation about me.

“I think you pretend you’re joking a lot more often than you actually are. Most people use humor this way sometimes, but I think you’re doing it as a primary strategy.”

Since they were the first person who cared enough to point this out, I not only confirmed my friend’s suspicion but also explained why I behave this way. It goes as follows.

It’s true that I pass myself off as joking more often than I actually am. What’s going on is that on the object level of a given discussion or framework, I’m being serious. However, on the meta level, I don’t necessarily believe these things. That is, I take ideas seriously without believing them myself. It’s my experience that virtually no amount of me saying I sincerely believe something, or saying I sincerely disbelieve something, actually causes everyone to react in a manner consistent with them believing me to be an honest person.
The perception of threatening or harmful lying reliably causes one to lose status with everyone. Harmless or non-threatening lying is usually neutral with regard to one’s status with most others, and may even increase status. Being upfront and explicit about this minimizes the probability of losing status. At least, that’s my theory. Anyway, joking is the more reliable way to explicitly signal that the lie one is perpetrating is non-threatening or harmless. I also have a comparative advantage in it.
People tend not to trust people who are never sincere. If I’m always joking, people will think of me as never being sincere, which would be bad for me. So, the only way to plausibly maintain I’m joking about things at the rate I do, without losing out in the trade-off of people not wanting to interact with me when they want a sincere interlocutor, is to not be sure whether I myself am joking or not. It’s my impression people will end up finding out whether one is lying or not. So, to be able to credibly claim I’m not sure if I’m being serious or not, I must actually not be able to tell if I’m being serious or not. So, I try not to take myself too seriously.
Signaling that I don’t strongly hold beliefs that would pose a threat to any of you, other things being equal, minimizes the probability any of you would harass, offend, stigmatize, dislike, or hate me, etc.
This is a habit I unthinkingly maintain across the whole internet, although if you’d prefer, I’ll make an effort to let my guard down here, and not pretend to be joking at all when I myself am completely serious.
Another thing is that I tend to mix humour and seriousness in much of what I write. I mean, sometimes I’m trying to be intentionally funny, and sometimes the most natural way for me to write on a subject in my own voice is one which, while making serious points, also comes across as playful and/or funny to myself or others. I work on multiple semantic layers. Like, the deeper structure of my opinion is that, all things considered, I obviously still think civnat is ambijectively superior to ethnat, or at least that the arguments for ethnat I’ve seen so far aren’t sufficient to change my mind. The humour in my writing exists for social purposes, like indicating that just because I have some non-trivial (even if practically meaningless) disagreements with friends over politics doesn’t mean I think any less of them as people, or that I otherwise intend to treat them with less friendliness in future interactions.

On Modeling Myself

I tend to confuse people, or at least give people one impression when what they think I’m doing, or what I’m actually doing, is something else. Lots of people in the rationality community try modeling each other, and themselves, and supposedly this helps us interact with one another. I have implicit models of people I work off of, which will probably form regardless of whether I want them to or not. I used to try noticing my models of other people in order to make them explicit, but I found this wasn’t useful. I’ve checked with others, and the perceptions or models others have of me are about what I expected. I think I could model myself as accurately as anyone else could model me, or as accurately as anyone else could model themselves, but I don’t know what I’d use such a model for, so I don’t make much of an effort to maintain one.

My whole life, people have been impressing upon me the importance of self-esteem, or at least awareness of self-image. These ideas as atomic concepts haven’t been useful to me, so I’ve made an effort to dissolve them, and then to disregard notions of self. That’s not to say I disregard myself, but that the best way for me to achieve non-attachment is to realize self-identity isn’t a fundamental part of my mind. Lots of people who know me would probably call me out as someone who doesn’t seem all that self-possessed, so while I may aspire to ego death as much as anyone, I’m probably not great at achieving it.

My typical response is much like the following: while self-identity isn’t a fundamental part of my mind, it’s still a part of my mind. The only reason I’d dissolve the self is for my own purposes, or some purpose I see as greater than myself, and not subject to the mere whims of others. So, with regard to what ontology it can be said “Evan”-as-subject really, truly, objectively exists in, while I do care in the abstract, I don’t care enough in the context of any particular conversation to qualify it to others, who aren’t entitled to an opinion on the nature of my own consciousness.

That all stated, it’d seem mutually beneficial to myself and others for others to have a better model of me, such that our interactions go more smoothly. So, this post will be the first in a series called “Explaining Evan”, in which I attempt to explain, from my own perspective, how and why I am the way I am, and why I behave in the manners I do.

 

Public Disclaimer Posts

I know lots of people who tend to write blog posts around common topics, themes, or subjects, in a serial or sequential manner. This, then, is a series of posts constituting general public disclaimers with regard to my public and/or private involvement in a number of activities specified below. This is the sort of behaviour other people find odd. However, this post exists so that all public disclaimers I put on my blog are written to uphold my opinion across virtually the whole set of probable conversations or contexts in which I’d find myself discussing the relevant subject matter. So, I save time by being able to link them to others instead of having to write up more considerations each time I talk to other people about things I care about. Additionally, I write in a way consistent with my personal interpretation of legalism. The links below constitute published disclaimers. This blog post will be updated as more posts in the series are published.

Paranoid Bayesian Legalist Disclaimer Regarding All Intents and Purposes of My Speech Acts Critically Targeted At What Others Might Characterize As My Self-Identified “Ingroup”

General Disclaimer Regarding Cryonics as of May 2017

General Disclaimer Regarding Cryonics as of May 2017

Rationalists tend to defend their cryonics memberships on the basis of the case Eliezer Yudkowsky originally made several years ago. I tend to believe rationalists have better estimates and evidence for those original claims than is popularly thought. However, those original estimates ultimately depended explicitly on the solvency of cryonics organizations within the broader societal framework they were in (in practice, just in the United States). In the last several years, though, there’s been a state of disorganization in the transhumanism and cryonics communities such that I’m not able to determine what is quality information, and some of the information I can’t rule out includes claims that cryonics organizations like CI and Alcor Life Extension can’t be relied upon. So, I’m skeptical of, and conservative about committing resources to, community projects for cryonics that aren’t first committed to addressing such allegations, figuring out the truth, and finding a solution which satisfies everyone before moving forward.

As far as I can tell, this really hasn’t been addressed in the cryonics and transhumanism communities. I’ve seen some rationalists acknowledge this and even cancel their active cryonics memberships because of this information. Most rationalists I know haven’t updated on this information to the point of canceling their cryonics subscriptions. However, it seems the rationality community is the only one tolerant enough of criticizing high-status ingroup establishments that people in it feel comfortable bringing it up in the first place. Generally, something like cryonics demands at least the level of transparency/accountability the effective altruism and rationality communities demand of their own flagship organizations, and this isn’t the case for the global community of cryonics subscribers.

So, I’m generally more in favour of, and willing to commit resources to, anti-ageing and longevity projects not dependent upon cryonics.

Paranoid Bayesian Legalist Disclaimer Regarding All Intents and Purposes of My Speech Acts Critically Targeted At What Others Might Characterize As My Self-Identified “Ingroup”

Summary: I, to the best of my own knowledge, fundamentally inoculate myself against hearsay for all the intents and purposes of any speech acts I (am to) make which are traceable back to my civilian identity, such that no one could uphold the indirect consequences of such speech acts as evidence that they were in violation of the law, for blog posts written after the publication of this blog post, as they relate to subject matter and content relevant to what people who think of themselves as my “ingroup” call our “ingroup”.
 
I’m going to be writing up some thoughts on epistemology and community norms as they relate to effective altruism and rationality, which may possibly, though by no means necessarily, be critical of the practices of most if not virtually literally all individuals involved in them, and the same for all adjacent communities, and persons who in practice associate with persons self-identified with all the above identified communities. They, and if you’re still reading this post, let’s be honest, probably you, are the sort of person who in their walks of life more or less signals valuing honesty as epistemic integrity and humility, all other considerations being equal.
 
[tl;dr: necessary paranoid Bayesian legalist meta-disclaimer, qualifying as ontologically fundamental to any worldview I’d be forced to defend in a court of law as my true beliefs regarding my actual behaviour for any given time occurring after the writing of this post. Feel free to skip.
 
Regardless of however consistently honest you yourself/yourselves are as (an) individual(s) in upholding your values with integrity and fidelity, I think we can all agree by all legitimate lights I am and will probably continue to be the sort of person who virtually everyone will respect to the point they wouldn’t condone the violation of my civil rights at the hands of the state as per the letter and spirit of the law of the jurisdiction which I am currently or will be residing in, which for the foreseeable future will for all intents only include nation-states commonly referred to, within their own ultimate sphere(s) of influence, as “the free world”.
 
The above paragraph includes what constitutes a disclaimer which for virtually all intents and purposes I know of will allow me to hold liable in court those persons who violate my rights on the grounds of retaliation for me saying something which could, in a court of law, feasibly and plausibly be upheld as having a non-zero probability of causing a non-zero amount of offense to the client of the defense or the plaintiff in question. If indeed this blog post would or could be admitted as evidence in justification of either the defense of my own person or my initiation of a case in a court of law on the basis of my public statements, let the record show I am here and now publicly committing to making those statements which are only intended to improve the community, and not cause real harm to any persons.]
 
If you didn’t read the “paranoid Bayesian legalist meta-disclaimer”, what I was going for was this: given that we’re part of the sort of community which not only values honesty but also mutual improvement through mutual constructive criticism, the only way I can be maximally honest while being maximally constructive in my criticism of the ingroup is to write in the manner in which I’m most comfortable. This includes how I think and speak to the hordes of ingrates[1] in any of the Facebook pages I administrate; how I talk out loud given implicit assumptions of being maximally “off the record”; and how I think inside my own mind. This is a colloquial manner that I expect in reality will cause a non-zero amount of offence to the sensibilities of people I know. Given that we live in a world in which, on their best days, the worst-off person can expect to have their life ruined more than they ever thought possible, and that in this topsy-turvy world there aren’t any surefire guarantees any of us won’t one day be in that situation, with all the political correctness we have in the world these days on top of that, who knows what anyone might say which could ultimately be traced back to them and be construed as illegitimate or illegal speech undoubtedly intended to directly incite hatred and violence. Given the unpredictability of what sorts of speech (acts) will or won’t hold up in a court of law if construed as such for the indefinite future, I’ve seen the need to inoculate myself against allegations that any and all retaliations against my person, up to and possibly though not necessarily including anything which could be construed as a violation of my negative human rights, are, were, will be, or will have been justified on the grounds that I merely hurt another person’s feelings.
 
[1] If you’re still reading this, you’re probably included in (one of) the group(s) of people I just referred to as part of my “hordes”. If you read the rest of my blog post, you’ll discover why I’m comfortable being the sort of person comfortable with referring to you as part of my hordes, despite all objections I expect you yourself could plausibly and credibly generate.

How to Get My Attention: Go From Interesting Ideas to Project Proposals

I guess I’m a creative guy, and I appreciate that people come to me with their novel theories about how effective altruism really functions, or how it ought to function in the future. However, so many people come to me that there are more ideas than I have time to pursue. So, ultimately, it’s worth my time if people are confident enough in these ideas that they’re willing to pursue the projects themselves and make something concrete out of their impetus to change society. If they’re so compelled, and what they believe is true, then I’ve got to find evidence of why it’s important enough that I should get excited and pursue it too. That’s how I think. Whenever a question is posed, or a problem exposed, in effective altruism, my ultimate task is to determine whether what you’re talking about matters more than anything else. Because that’s already what’s at stake for so many things people are already doing in the effective altruism community.

Comedians Are Also Responsible for the Infotainment Crisis

If people like John Oliver and Jon Stewart were going to not only benefit from the low quality of cable news but lean into it by making satirical news programs based on some amount of authentic research, they could at least have been up front about it. If they’re more like real news than what we call the news, they’re basically news programs too. But because of how cable ratings work, the corporate conglomerates which own everything from informative programs to pure entertainment don’t track any nuance about why people are watching programs. They only track how many people are watching.
 
So, to compete with one another, the quality of comedy shows has become more like that of news programs, and the quality of news programs has become more like that of comedy/variety/talk shows. By not acknowledging that, for business-related reasons, their shows are doing real if not legitimate journalism, these comedians as writers and producers preclude themselves from being held to the standards the public holds other types of news media to. However, if we for reasons of prestige don’t acknowledge the obvious reality, that these shows function as a source of news for millions of people, then that leaves the quality news programs in a sector where they’re not just competing against low-quality news, but against all manner of broadcast video entertainment as well. When all shows which discuss current events are optimized for how enjoyable rather than how informative they are, the shows which were trying to be truly informative lose their competitive edge. They become neglected and irrelevant.
 
The Venn diagram of “information” and “entertainment” for non-fictional video media is now a circle. There is only fake news, i.e., the normal media, pretending to be real news, and real news, i.e., facts, pretending to be fake news, i.e., presented in a manner optimized for entertainment instead of quality. What’s more, because there is no standard of credibility anymore, the glow of respectability from teams like those of John Oliver and Jon Stewart, which have actually on occasion done excellent coverage of current events, extends to other celebrities. Any famous person is now as entitled to an opinion on politics or culture as any other. Credentials don’t include a history of experience, association with any particular type of institution, or educational background. Whose opinions the news media treats as worth sharing, not unlike an editorial, is based entirely on how popular a person is on a given day. And that’s it.
 
Now you’ve got a half dozen shows doing all the same things, but they just consolidate liberal biases in the eyes of millions of people who’d be better off in a world where investigative journalism like 60 Minutes, tailored to match the tastes of young people, existed. All these other shows suck way more than what Jon Stewart was doing. Trevor Noah is not nearly as good a host as Jon Stewart on any dimension I or others seem to care about.
 
This all coalesces in how Hollywood celebrities, as a cabal, functioned more as a propaganda machine for a political candidate not themselves from the arts establishment than at any time in recent memory: Hillary Clinton in 2016. There are other major factors which play into the ugly, amorphous blob that just is all infotainment and celebrity culture, like social media. But I’ve read articles about those. I’ve not seen anyone acknowledge the unique role comedians like Jon Stewart played in shifting the political climate and the nature of public discourse in the contemporary Anglosphere. There’s one goal Jon Stewart was keeping in mind, even as his other goals were noble, that he didn’t disclose. Because he’s so shaped and influenced our expectations of what news ought to be like, he should acknowledge what corporate media ownership forces him to do. Ultimately, more than to either inform or entertain, The Daily Show and all shows on TV like it exist to earn money for their producers and advertisers. After all, it’s show business.

No Positive Reason to Suspect OpenPhil of Wrongdoing

I’m not too concerned with OpenPhil’s grant to OpenAI. I don’t expect the potential conflicts of interest amount to anything. I think there’s a perverse outcome where everyone is self-conscious about this except OpenPhil/GiveWell. Like, had they not disclosed potential conflicts of interest, nobody would’ve noticed. Professional relationships between two different teams in effective altruism, when there are close mutual friends or spouses working at either organization, have existed in EA for years. It happens in every community in every culture on Earth. It’s not always nepotism.
 
Humans are going to be human. If we act surprised that people will be drawn to, and form close bonds with, people who share their attitudes, interests, and values; whom they spend lots of time with; and whom they’re forced to be exposed to in the course of their daily lives, we’re phoning it in. That’s not to say the questions aren’t worth asking in the first place.
 
What I’m saying is that if it’s something we’re hemming and hawing over, when effective altruism is already a community where I expect its members will ultimately tread lightly and not be outrageously disrespectful, we might as well be candid about our concerns. There’s no point being polite for politeness’ sake alone if what you’re doing is concern-trolling or passive-aggressively expecting some subtext to be noticed.
 
The culture of EA is sociopolitically dominated by secular liberalism. I think there’s this self-consciousness in that mindset where the ideal is for equality of opportunity, which we hope leads to equality of outcome, but often doesn’t. When things don’t line up the way we hoped for despite everyone’s best efforts to set up an (as close as was realistic to) objectively fair system, we’re apprehensive. We fear the problem might be us, and that we’re unable to live up to our ideals.
 
I don’t suspect Holden, or anyone at OpenAI or OpenPhil, is culpable of nepotism, implicit cognitive favouritism, or of failing their responsibility to try in earnest to do the most good. I think there’s a lot about this grant OpenPhil isn’t disclosing, and it’s odd. I think perhaps they’ve made poor judgement calls or errors in reasoning we can sensibly disagree over. I think the apparent gaps in what OpenPhil’s doing here with such a grant to OpenAI may be filled in with concerns over misdeeds, but what real mistakes there are are of the far simpler sort everyone makes.

Response to Community-Building in Effective Altruism

The way I wrote it above was strongly worded in favour of effective altruists gaining a background. I think immersing oneself in the more substantive, well laid-out, intentional blog posts is important. My prior comment made it sound like this applies to conversations on Facebook. One problem is that some major historical discussions in effective altruism happen in groups like this, but those aren’t well-tracked, and nobody copies and collects the hyperlinks for reading at a later date.

So what we’re asking is for people to follow the important Facebook posts all the time, from the right people and the right groups, to learn about positions which become tacit common knowledge as time goes on. It’s not just that we’re saying “pay more attention on Facebook”; it’s like sorting through a puzzle to figure out which sources of information are considered acceptable or not. Effective altruism is an intricate network, and the fact that some people have formed personal relationships over years of shared social context, making the network more intimate rather than outgoing or attractive to newcomers, can make entering some sort of “in-crowd” in effective altruism intimidating.

This is a problem Brian Tomasik has talked about in the past in “Why Make Conversations Public”. I think long-time community members have a kind of institutional or community privilege: the historical advantage of our experience in the community. We take for granted that everyone ought to know what we think are the best ideas now. If I think about this a bit, though, I can empathize with those who find this attitude somewhat arrogant. These implicit expectations can be intimidating, and can make gaining social traction in the effective altruism community feel unwelcoming. Like, moral excitement is touted and courted as a motivation for doing the most good, but people who get excited by EA and try to enter get shut down.

 

I think this is a problem that exists online; if one can join a strong in-person community, people form bonds which make them more welcoming to newcomers. While this solves the problem of joining the community for some, it can create a problem for others. Places like the San Francisco Bay Area or Metro London are expensive to live in, and in other ways, too, the difficulty of moving to these places isn’t publicly acknowledged even if it’s empathized with. I don’t know what percentage of effective altruists feel this way, so I don’t know the true scope of this issue, but I’ve been hearing anecdotes for years of a gap which generates a dissuasive sentiment. I know the rationality community has a record of trying to debug these sorts of problems, with mixed success. I guess trying to find some best practices, and accelerating the rate at which bugs in community expansion are fixed while keeping community cohesion intact, is what Raymond Arnold is doing with his Project Hufflepuff.

I think if long-time members of the community like myself and others are going to gripe about people not catching up to speed fast enough, or not closing all their procedural knowledge gaps fast enough, we have a responsibility to also make inroads into the community more welcoming. This is the sort of thing my friends in the closely knit Seattle Rationality/Effective Altruism community have been thinking about lately.

I think people from some smaller geographic communities can feel more resentful, though those aren’t feelings they’d defend. Really, the most damaging part isn’t so much a brain drain as it is that community leaders form connections with organizations in the major hubs (e.g., Oxford, SF, Boston?), which leaves a leadership vacuum at home.
Cultivating a culture of welcomingness, and finding ways to socially and culturally invest in local communities all over the place, are hard problems to solve. I think a start, though, would be for the EA Handbook to be updated and spread around or promoted at the level ‘Doing Good Better’ gets promoted at, and for there to also be a community organizer handbook written in chapters, with tips from various local organizers around the world, as opposed to something centrally written by a single organization like LEAN or CEA. I may pursue online coordination on this sort of project with Project Hufflepuff, the Accelerator Project, Leverage Research, CFAR, CEA, LEAN, Sentience Politics/EAF, or other groups.