I have been distracted lately, but I wanted to put out some content, so I’ve gathered my best recent notes. I’ve added one bit of new content at the start. Since this is material I’ve posted as notes on here, proofreading is limited- the rules are similar to Tweeting in that regard.
As always, I’m pretty poor, and I make my content available for free, so if you’d sign up to help me it would mean a lot, both emotionally and monetarily.
Fragility, private uncertainty, public uncertainty, and human uncertainty.
It’s useful to distinguish:
Fragility. A process is fragile if small variations in the process can flip the result. An election decided by one vote, for example, is fragile, since that voter could easily have died, fallen sick, etc.
Uncertainty. A process is privately uncertain if the speaker doesn’t have the information required to know the result. A process is publicly uncertain if publicly available knowledge isn’t sufficient to know the result. A process is humanly uncertain if no human has the information necessary to know the result.
A lot of people assume that if something is uncertain, it must be fragile. Not so! The two often go together, but they needn’t. Consider a war: the wisest generals in the world might try to assess who will win it and be unsure, yet once the fighting starts, it might become clear that one side is advantaged and that only huge variations in the boundary conditions would have changed the result. All human knowledge of military art, all the knowledge of the inventory and soldiers of each army, might still not have made this clear beforehand- for example, because the advantage might flow from an interaction of factors of a sort humans are bad at foreseeing. The same is true of elections: just because the result is deeply unknown doesn’t mean it’s going to be close, let alone that a small nudge could alter it. Generally speaking, though, the converse holds- complex processes that are fragile are uncertain.
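The distinction can be made concrete with a toy simulation (a sketch of my own; the vote totals and the perturbation model are invented for illustration, not taken from any real election):

```python
import random

def winner(votes_a, votes_b):
    """Return the winning side given two vote totals (ties go to B)."""
    return "A" if votes_a > votes_b else "B"

def is_fragile(votes_a, votes_b, trials=1000, seed=0):
    """A result is fragile if small perturbations can flip the winner.
    Here the perturbation is: up to 10 random supporters of each side
    independently fail to vote."""
    rng = random.Random(seed)
    base = winner(votes_a, votes_b)
    for _ in range(trials):
        a = votes_a - rng.randint(0, 10)
        b = votes_b - rng.randint(0, 10)
        if winner(a, b) != base:
            return True
    return False

# An election decided by one vote is fragile...
print(is_fragile(500_001, 500_000))   # True
# ...but a 10,000-vote margin is robust to small perturbations,
# even if, beforehand, no observer had the knowledge to call it.
print(is_fragile(505_000, 495_000))   # False
```

The point of the second call is the one made above: the result can be deeply uncertain in advance while being entirely non-fragile once the process runs.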
The rigours of self-care moralism
A cluster of traits I often see in a particular type of Tweet- the type that often gets, like, 80K likes.
Thin progressivism disguises a worldview that is deeply individualistic and moralistic.
An approach that medicalises problems without offering any relief of responsibility- e.g. your problems are mental illnesses, but if you don’t pay for expensive therapy for your mental illnesses you are In Breach of Your Self Care Obligations.
A clearly religious role is assigned to therapy. Therapy is no longer for specific problems; it is now for the general human condition. Original sin, from which none of us escape and which requires redemption, has been replaced by original mental illness, from which none of us escape and which must be treated therapeutically.
Deep voluntarism about moral obligations. Anyone can prove to be a fake friend, and when they do- or even if you suspect they might- dump them quick.
A problem in your romantic relationships? DUMP THEM.
Complaints about sexual and romantic failings presented as universal sage commentary, but made and liked for deeply individual reasons (someone hurt me/I hurt me).
An overwhelming demand that you should TRUST YOUR INSTINCTS. Instincts are taken as ground truth rather than as, at least potentially, a reflection of Gramscian capitalist “common sense”.
An emphasis on “freedom” and “authenticity” hiding a demand for only very specific expressions of freedom and authenticity.
An obsession with cultivating the self without any kind of real thought about what is being cultivated and for what purpose.
Other people are failing you. They are treating you in a fucked up way. You, however, are only failing to sufficiently Care For Yourself.
Constant affirmation presenting itself as unconditional positive support, but which is really deeply conditional on what kinds of behavior are desired.
Why aren’t you focusing on condemning….
"Why are you complaining about Bush and not Saddam?" "Why are you protesting Israel and not Syria?" "Why are you calling for Biden to step down and not Trump?" Do these people think the purpose of speech is to keep a running tally of the good and bad things? If we found some civilization through a telescope in Andromeda and could hear, but not influence, them, would they say: “Why are you calling for Biden to stand down rather than spending all your time calling for the resignation of Emperor Zzyyzzxan the Bloody?”
Optimal condemnation frequency is determined not just by how bad the target is, but by the marginal probability of your condemnation having an effect.
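That claim is just an expected-value calculation, and can be sketched in a few lines (a toy model of my own; the names and numbers are purely illustrative):

```python
def condemnation_value(badness, p_effect):
    """Toy expected value of condemning a target: how bad the target is,
    scaled by the marginal probability that your condemnation actually
    changes anything."""
    return badness * p_effect

# Invented numbers: a distant tyrant may be ten times worse, but if your
# words cannot reach or move him, condemning him has lower expected value
# than condemning a nearer, movable target.
distant_tyrant = condemnation_value(badness=100, p_effect=0.0)    # -> 0.0
local_politician = condemnation_value(badness=10, p_effect=0.25)  # -> 2.5
```

On this model, the Zzyyzzxan answer falls out immediately: a condemnation with zero marginal probability of effect has zero expected value, however bad its target.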
The welfare of the dead
Freddie de Boer wrote this lovely post:
substack.com/home/post/p-146431578
I want to register a point of disagreement though. I think one can harm the dead. I think there is a profound sense in which we are ongoing projects, ideas, legends, tropes etc. Understood at this corporate, extended level, biological death is a significant moment in life, but it is not yet the end. Our agency continues to ripple out. Just as our memories acted as a continuous stream altering our experience in life, so they flow out like tributaries to the living. This flow of action and agency is what I identify with much more than body and even mind, and it can still be harmed posthumously. You can even retain a simulacrum of the power of thought and action- every time people’s choices and thoughts are influenced by an accurate appraisal of what you would think and do in some situation.
Of course, the debate here is axiological and conceptual rather than metaphysical, but it matters. You can identify your interests with whatever you like. But a lot of people implicitly identify with this extended view of the self; I think it gives some comfort, and it’s important to think about this in how we consider the dead. Even if you reject it utterly, many accept it- through their actions and feelings, if not their beliefs. That alone gives us reason to consider the interests of the dead, out of respect for the sentiments and hopes of those who think biological death is not quite the end.
Sometimes, justice requires harming the dead, and this may be the case in the Alice Munro case- not because harming her is intrinsically good, but because this story must be told.
Leftwing transhumanism
There’s a debate on Twitter at the moment on PrEP. “Why use PrEP when you could avoid AIDS by not having gay sex? That’s what the world is trying to tell you. That’s the natural consequence.”
Where does a leftwing transhumanism start? Perhaps in the eyes of the enemy, in the recognition that conservatives are terrified of technologies that allow us to alter the world and remake it kinder. Why? Because of the harsh constraints they claimed only they had sober knowledge of, and regretfully called us to obey. Those harsh constraints were never really constraints to them- they were the crown of life and the giver of purpose. The regret couldn’t be more fake. Human life shaped by horrific fears of disease, hunger, and death- that’s what, at least in their purest form, they want. They want us to be governed by our own biology, our will bent low as the slave of the flesh, ever fearful of death and a thousand natural shocks. No fags, no sluts, all and only women to do the labor of pregnancy, watch out for those strange immigrants who might give you a disease, nations competing like organisms for survival, violence perpetually on the table.
To be conservative is to want human life to be limited in countless ways that have nothing to do with kindness, or even virtue rightly understood. In response, a transhuman left must propound: human freedom is to be completed not just socially and politically but biologically- so we can be ourselves and be together in a way that overthrows not just human tyrants, but the oldest tyrant of all. Promethean fire raised in defiance of those who would rather we went cold- that's left transhumanism.
Liberation cannot be a wholly political task, because liberation is not complete until no one suffers from mental illness who does not want to. Until all remaining pain and suffering enriches life rather than twisting and distorting it purposelessly. Until everyone who wants to walk or see or hear can. Until no one dies unless they are ready.
I think that a lot of leftists think that the problem is with people who recognize the existence of hard tradeoffs- that in recognizing these tradeoffs we let neoliberalism, capitalism or whatever “in the door”. In truth, hard tradeoffs are everywhere. The problem isn’t acknowledging the existence of hard trade-offs, the problem is that the people making them are biased in favor of the rich, for a variety of reasons.
The next two years in AI
Here’s my sense of where we’re at with AI in the medium term (~2 years) if something doesn’t perturb the current course. Keep in mind that it is more likely than not that something will perturb the course.
There’s a customary joke for starting these things “Making predictions is hard, especially about the future”- nonetheless I think it’s worthwhile to lay out predictions every now and again, even if only to understand how you’re implicitly thinking about things at the moment.
There are four core challenges that need to be met before language models can replace almost anyone who works with a computer. Two of the four are summarised in OpenAI’s path-to-AGI document bloomberg.com/news/articles/2024-07-11/…; the other two, the last two below, are more minor.
1. Agency- LLMs need to be able to control a computer. Various efforts are underway in this regard. Chuck in audio input/output here- a solved issue, but not yet rolled out.
2. Reasoning- At present, LLMs are sufficiently bright to replace a lot of office workers, but there’s a lot they’re just not up to because of reasoning limitations. An obvious and replicable area here is coding: off-the-shelf LLMs can’t code to the standard necessary, and certainly not to the standard expected. Other areas include complex legal reasoning, engineering, etc.
3. Memory- Topline LLMs these days have context windows in the hundreds of thousands. Some have context windows in the millions, but there are also worries about the context window as a unit of measurement, since performance across a context window can be very inconsistent. Although for many purposes 200,000 words is fine, better systems for managing memory are obviously necessary for some kinds of long-term project- especially considering this covers both input and output length.
4. Hallucination- There’s been a dramatic drop-off in hallucination rates, and honestly, when I use Claude 3.5 Sonnet, I feel the hallucination rate may be lower than what I would get from probing a human with tricky questions. Nevertheless, I think concerns in this area mean many people will want guarantees.
Once these problems are solved, you have an office worker that can do about the equivalent of an hour's work for ten cents in a few minutes.
Now, if these problems- especially agency and reasoning- are resolved in the next two years, and we have our ten-cents-an-hour office worker, I anticipate political anarchy.
>Mirabile dictu, an increase in the amount of AI discourse by as much as a factor of ten
>No big spike in unemployment just yet, but a sense that one is looming. Labor-market effects definitely starting.
>Calls to ban AI. Calls to prosecute people associated with AI.
>Radicalisation in pro and anti AI sectors
>Previously unimagined confusion and consternation in the education sector.
>A scramble by ordinary people to find employment ‘safe harbors’
>Scramble by capitalists and economists to predict the safe harbors for capital. Huge ballooning of popular and academic work on the subject.
>Technological progress blindness- “It can’t do manual jobs yet, so we can plan as if it’s only ever going to take PMC office jobs”.
>Politicisation as follows: Blue = “AI bad, AI threatens our constituents’ jobs”; Red = “AI good, AI hurts lazy email-job preachy moralists”.
>AI as topline geopolitical issue
>Mass liminality- the looming sense that the world as it is is finite, and that we are encountering its limits. Uptake of AI apocalypticism. That doesn’t mean such uptake is correct- it’s possible that AI will stall at being able to replace office workers and no singularity will happen- but people will feel like they are living in millenarian times, at least for a while.
On the other hand- if these problems, especially agency and reasoning, do not see a great deal of progress in the next 2 years, even if the progress is in some ways impressive, I predict:
>An AI crash narrative. Most similar to the dotcom bubble, but with numerous analogies made to Bitcoin and NFTs- a poor analog, but attractive because of the sociological overlap.
"Liberation cannot be a wholly political task, because liberation is not complete until no one suffers from mental illness who does not want to. Until all remaining pain and suffering enriches life rather than twisting and distorting it purposelessly. Until everyone who wants to walk or see or hear can. Until no one dies unless they are ready."
Thank you for sharing such thought-provoking notes.
Here are a few reservations I still have about the quote from your post.
Let's call A the starting point, which is the human condition as we know it: a life exposed to all sorts of limitations and suffering irrespective of our choice (mental illness, pain, physical impairments, and ultimately death).
Let's call B the transcendence of such limitations.
Now, at the core of some popular religious beliefs, it looks like (I'm by no means knowledgeable on the subject) the gap from A to B can be bridged on an ideal plane- a plane other than physical reality, governed by laws that don't apply to the physical world and whose workings we have little to no clue about.
What about transhumanism? Does it propose a way of bridging the gap between A and B (or of achieving B for those who wish to) by means of scientific progress? In that case, what lies between A and B? What does the path between A and B look like? Besides, will it be a compulsory path for everybody or an opt-in path (open to all? Means-tested or conditional on other kinds of tests? Paid?)? What will that path look like in terms of costs, suffering, mental illness, pain, and death? Will uncertainty about achieving B (or about not achieving it for all) be factored into the eventual costs?
When Christianity wielded secular power, it forced all sorts of unpleasant means on reluctant people in the name of saving humanity and making it deserving of achieving B in the afterlife.
What about the transhumanist approach?
As they say - the road to hell is paved with good intentions.
I don't mean to say let's do away with good intentions- just let's tread carefully when we think about tinkering with complex systems we don't fully comprehend, hoping to make them more suitable to the way we wish them to be.
Also, a less politicised and tribal platform of free public discussion might be a desirable starting point, I think.
Upgraded to paid! I hope it helps!
"Once these problems are solved, you have an office worker that can do about the equivalent of an hour's work for ten cents in a few minutes." - I think you're overestimating, at least partially, how much damage this will do to the middle and upper middle class of rich countries in the medium term. In fact, I think it's plausible that a lot of PMC and PMC-adjacent people could pretty enthusiastically embrace AI over the next few years:
- People whose job is mostly sales/customer service/schmoozing and who hate or are put off by the technical-paperwork side of things (real estate agents, family dentists, contractors, email job types, middle management, quite possibly lawyers, accountants, and doctors)
- Weird nerds in tech jobs who can use AI to automate a lot of drudge work but have enough technical knowledge in key areas (including how to use AI) that management will balk at letting them go
- Creatives/marketing people who can make good judgment calls about how AI-generated advertising and publicity will look to the public
- Hell, fatigue with social media AI goop could help out traditional journalism
I'm sure at least some capitalists will use AI advancements to harm the working class, and there will be some long-term consequences of AI that will go ignored in this rush. But I think the effects, rather than "political anarchy" and "radicalization," could very well be excitement or relief. Maybe analogous to how some greentech advances have cooled climate change radicalization somewhat in recent years.