Dear Bear, I just read your essay on The Ballad of Reading Gaol in your book and loved it — so thank you.


Thank you, I really appreciate that. I've been thinking a lot lately about the figure of the former wrongdoer.


I've been trying to think of a way to phrase a question I have for Bear as a persona, that doesn't seem trite and obvious. Maybe it just _is_ trite and obvious.

I think a _lot_ of people who are in, or adjacent to, the EA / Rationalism world are shaped heavily in how they think about things by the experience of feeling different, outside, rejected, minoritarian. But that cashes out in an astonishingly diverse array of behaviors and ideological commitments. In some cases people get truly quite warped. (See: https://www.wired.com/story/delirious-violent-impossible-true-story-zizians/ ) You get the neo-reactionaries (and before that the Ayn Rand acolytes) who decide that what the world needs is a ruling elite (which naturally includes them) that's not held back by the peons. But from the same* backgrounds, you get a lot of people who marshal their feelings of isolation and difference into an ethic of solidarity and liberal pluralism.

* At least I think it's the same? I remember reading Rand when I was a college freshman and feeling the pull. I also read some "pick-up artist" stuff in that same era. (I think originally because I was taking a Natural Language Processing class, and somehow stumbled onto some creepy page about using "neuro-linguistic programming" to manipulate women)... I can kind of see how a lonely nerd with the same background as I had could've gotten sucked into incel / manosphere stuff. There were definitely times in high school and college when I resented "being friend-zoned".

I had bullies calling me a fag as early as elementary school. It turned out they were partly though not entirely wrong -- I'm a mildly swishy, mildly genderqueer pansexual. But in my adult life, people _frequently_ mistake me for a straight man. My attractions lean heavily towards femme -- though femme-y guys and nb folks are great, and in fact I married a trans nb person. But since _they_ get mistaken for femme by folks who need to categorize everyone, I get mistaken for straight.

I definitely feel like being queer, and in particular the experience of being recognized as, and threatened for, being queer, helped build my sense that it's important that society protect minority identities, as long as they're not substantively hurting anyone.

But clearly it didn't work out that way for Peter Thiel! I guess maybe it's just, he's lucky enough to have won the lottery, and lacks the moral imagination to care about the next kid growing up with his own past circumstances. But it's just so alien. I guess I struggle to project myself into the mind of somebody who, themself, _doesn't_ imagine themself in others' circumstances. Like, the exercise of trying to imagine not caring about other people, makes me not care about the person who has that perspective, and then my brain coredumps and reboots.

I guess the question is, does this sound familiar? Do you think your experience as an outsider, particularly because of queer identity, pushed you towards the kind of humanist and transhumanist politics you write about? Do you find it as unsettling as I do that while that kind of path feels logical when you're on it, there are examples that suggest things are much more random and contingent? That potentially quite small changes of circumstance, _not_ changes to what we would think of as our most important experiences, might result in a very different sense of self.

(I guess you said you're writing something about the fluidity of identity already? I'm guessing I will find that very interesting, if it's published somewhere I can read it.)


I think probably some combination of traits pushed me towards the views I have, and that combination would include queerness, but the constellation is very large, and if you changed elements of it, the sign of various of my views could flip. I think probably a lot of what I say here was set in motion by a sense of being an outsider, late social blooming, harm OCD, homosexuality, an interest in the humanities, high decoupling, instinctively leftwing sympathies, etc.

It's all very hard to know. To take something simple like leftwing orientation: statistically, OCD, anxiety, autism, and same-sex attraction all seem like positive correlates, but the meaning, the shape that leftwing orientation takes on, is affected by vast constellations of culture, personal experience, and often total happenstance.


I'm low income and my dad just lost his computer networking job at a finance corporation and is studying AI, so my household's financial circumstances suddenly became precarious. I don't have a college degree and was thinking of getting a certificate to get a computer technician job, but I don't know how useful that will be, and your pieces on AI taking away jobs scared me, to be honest. I'd be curious whether you have any advice on skills to learn to weather the storm of an AI future (anywhere from one to five years out). Should I just try my best to go back to college instead?


By computer technician, I assume you mean the manual side of working with computers? In general, I think it's really, really difficult to plan this stuff out. It's possible, even likely, for example, that the increased relevance of AI could raise demand for manual computer technicians. I think the tentative best guess at the moment is that people who learn to do something with their hands are in a somewhat more secure position than white-collar workers, but the uncertainties (political, technical, economic) are huge, and any predictions about what work will be safest should be taken with a grain of salt.

I guess the best, very tentative advice I could give is:

1. It's always better to try than not to try, and even if an industry goes under, it's better to have work experience in a now-defunct industry than nothing at all, or only low-skilled work.

2. Maybe, at least all else being equal, lean towards jobs that have a manual component.

3. You can't live in fear. Whatever problems you face in the job market will also be faced by tens of millions of others, and on that basis, have courage.


1. I believe I’ve seen you voice support for “post capitalist” economic structures in the past. Do you think we can get there through hyper-acceleration of capitalism? (And how do you feel about Nick Land’s earlier work in general?)

2. How do you come to terms with your own mortality?


I think the acceleration of technological growth is wise (modulo concerns about AGI in particular and some biotechnology), but would distinguish this from the acceleration of capitalism. I feel they are often (wrongly) equated. I know little about Land's earlier work.

Oh boy, mortality.

I often long for death when my OCD gets sufficiently bad. I dream of abstracting myself from physicality and living on through my works. Deeply unhealthy stuff, but OCD will do that to you. I wonder whether people alive in our time will be digitally reconstructed at some point, and that gives me a sense that I may survive my own death, depending on the accuracy of the reconstruction and given my views on personal identity over time. I hope to survive till the world improves, which might, among other things, include a full or partial cure for aging. All these factors leave me uncertain as to exactly how soon I am likely to permanently and irrevocably die.

But there is a sense in which, even if we achieve immortality, we will all die slowly, by degrees, in the end. Everything fades, and eventually we will become so different as to be unrecognizable to ourselves. I'm writing an article on it at the moment.


For you: I'm a very casual observer of effective altruism. It seems to offer great promise, but I'm also a little wary of the movement around it. Would you say that this is a fair summary of the various strands of EA?

1. A philosophical school of thought that argues wealthy individuals (i.e. those in the top 10% of the world, which includes more than 50% of America's population) have a moral obligation to give more of their money to altruistic causes, particularly ones that have an impact on life-or-death needs.

2. Charitable movements that seek to redistribute wealth and vital medical services to poor parts of the world

3. Animal welfare organizations, particularly ones with a scientific tilt

4. Non-university affiliated startup academic/research groups focused on identifying high priority causes for charitable efforts

5. A very large subset of #4 focused specifically on AI

6. A general culture of utilitarian futurism that has sprung up adjacent to the tech world

For the record, I'm very sympathetic to 2-4, haven't done enough research into #1 to have settled opinions on it, slightly wary of #5, and very wary of #6. Do you have any good arguments for why I should reconsider any of my beliefs, or do you have any recommendations for arguments for any of these strands of EA (or further strands) in an accessible, non-jargony format?


I think this is a fair summary. By nature I'm not a joiner, so I haven't learned as much about EA as I might have (plus I live in Australia), but these points line up with my perceptions. I broadly support its work on AI, animal welfare, disease elimination, etc. Its first-order effects have been remarkably good.

Culturally, I think EA is generally a positive force in a much larger milieu that includes both positive and negative forces: the tech nerd sphere. The tech nerd sphere as a whole is quite scary (e.g. Peter Thiel), but EA gives people who don't fit in with existing left frameworks a way of relating to a broad humanitarianism. Without it, I don't think they'd find anything better.

Worth remembering that a better question than "Is X ambiguous space good or bad?" is almost always "How can we fight for our slice of X space?" I'm much more interested in strategies for reaching out to EAs than in abstract evaluation of their political valence.

I wonder how all this will change as the imminence of AGI begins to dawn on more people. How will those who tried to prepare in advance be seen?


In a past article on OCD you linked to this blog:

https://notmakinglemonade.com/myblog/2020/2/5/im-so-ocd-about-scrupulosity

I found her descriptions of OCD thought patterns to be shockingly accurate. Unfortunately, I assumed the internet was forever and didn't save the blog, and now it's gone. (I even tried to convince Squarespace to let me pay the author's bill, but for any number of reasons they didn't let me do that.)

Do you have anything from that blog saved?

Thanks, Bear


Sadly no. I've looked into the matter previously :-(.


I'm a senior physics major, currently planning on going to grad school for biophysics. Partially because of you (and the rationalist blogosphere more generally), I want to go into AI safety research. I have a very good CV coming out of college for grad school, but not for AI.

My original plan was to work in a neurophysics lab on something in AI theory, since the physics departments at the schools I want to go to have professors who work on such projects (particularly on the generalization capabilities of neural networks). However, with the extremely fast development of chain-of-thought AI, I worry there is a significant chance that this path is too slow to substantively help AI safety before takeoff. Should I just go with my original plan, or is there some way for me to contribute to AI safety sooner? I haven't done anything with ML, but I'm a solid programmer and I'm confident I could be of value on the theory side of safety research.


Okay, so first of all, are you going straight to a PhD or doing a master's first? In Australia, jumping straight to a PhD is common, but in the US I'm led to believe master's degrees are more strongly expected.

Is there a way you could do a grad degree thesis and research on the subject of AI safety, maybe starting with something like the interaction of generalization with alignment?


In physics it's quite common to go straight to a PhD. I'm currently accepted at several top-20 physics PhD programs. I think I could do something in AI, but there's no guarantee it would be focused on AI safety. It would depend on the professor whose lab I end up in, and what projects they're willing to take on. AI generalizability, for example, is pretty hot in the neurophysics/complexity-theory physics world, but alignment/interpretability is not.


Interesting. It's a big-money area, so I wonder if they just need the right pitch?

A lot depends on resource intensity as you know. The less it costs, the further you can go afield of your supervisor's research.

Have you thought about emailing potential advisors?


I'm doing grad school visits right now, so I'm definitely asking potential advisors about their AI research. I should probably ask how much independence their grad students have in choosing their projects, but my intuition is that grad students starting out are expected to begin with an existing project. Later on I could perhaps pitch a project more focused on an area of alignment.

To be honest, though, I think I understand that path; I'm more trying to grok what it would take for me to break into serious AI alignment research in industry, since those teams do much faster work. DeepMind has job applications open, but it would have to be a hell of a cover letter to convince someone to take me with no ML experience. Maybe the move is to start my PhD, then apply as soon as I have a reasonable amount of experience and master out if I get a job?


My intuition is that at this stage your best bet is to contact advisors like crazy and try to find one that will commit upfront to allowing you to do some sort of extension of their research into the alignment realm. However, I have limited knowledge of PhDs in the physical sciences.

I wonder if it's not worthwhile advertising your interest on various rationalist and EA sites, to see whether anyone with connections to the area has ideas about possible supervisors. A post on the SlateStarCodex subreddit, and a personal request to Scott to share that you are searching for a biophysics supervisor interested in AI and willing to work on alignment with you, might be an idea.


Mmmm, posting on the subreddit seems like a good idea. I tried writing on the open thread and got pretty low-information responses. Will work on this.

Thanks a ton for answering! Love your work, values, and writing style. I have a ton of respect for your ideas, and although I don't necessarily agree all the time, you've made me think really hard, and I appreciate that!


For "you" the philosopher: I'm curious whether you would associate yourself with any specific "school of thought" on moral philosophy. Like, would you say you're a consequentialist of some flavor?


I'm a consequentialist. I tend to think that most questions of good revolve around the welfare of sentient beings, although there may be other factors that come into play in some circumstances. I believe welfare is an extremely rich notion (an objective-list view), not something simple like desire satisfaction or pleasure.


Why am I the best philosopher?

Incidentally, I wouldn't subscribe even if I could afford it. Knowledge and creativity are the public commons, and copyright is a crime against humanity. Also, no one is entitled to be paid for creativity.
