I don’t know a lot about Caitlin Johnstone; mostly I know her as a critic of United States foreign policy. What I do know is that she’s right that most people are missing the real dangers of online censorship. She writes:
“But far, far more consequential than overt censorship of individuals is censorship by algorithm. No individual being silenced does as much real-world damage to free expression and free thought as the way ideas and information which aren't authorized by the powerful are being actively hidden from public view, while material which serves the interests of the powerful is the first thing they see in their search results.”
Let’s expand a bit on this line of thinking.
The effects of algorithmic censorship
I'm going to start with a very brief sketch of what we know about where algorithmic censorship is up to.
We see when a platform bans someone. We generally don’t see the adjustments that change how much we see of this or that. I’m not talking about shadowbanning here; like banning, shadowbanning is a vulgar and direct form of power. The most effective forms of censorship are subtler.
Algorithms more complex than any single person can understand curate your feed. Before you see anything, these machine-learning-driven algorithms weigh the content of the post, who the writer is, who the people responding are, and many other factors.
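The logic can be made concrete with a toy sketch. To be clear, everything below is my invention for illustration: the signal names, the weights, and the scoring formula are assumptions, not any real platform's algorithm, which is vastly more complex and not public.

```python
# A toy sketch of feed curation, for illustration only. All feature
# names and weights here are invented assumptions, not a real platform's.

from dataclasses import dataclass

@dataclass
class Post:
    topic_score: float        # how "safe" a classifier judges the content
    author_reputation: float  # standing of the writer
    responder_quality: float  # standing of the people replying
    engagement: float         # raw clicks/likes/replies

def rank_score(post: Post) -> float:
    # A weighted blend of content, author, and audience signals.
    # Note that topic_score lets the content itself move a post up or
    # down: this is censorship operating inside the conversation.
    return (0.4 * post.engagement
            + 0.3 * post.author_reputation
            + 0.2 * post.responder_quality
            + 0.1 * post.topic_score)

def build_feed(posts: list[Post], limit: int = 3) -> list[Post]:
    # Only the highest-scoring posts ever reach the screen.
    return sorted(posts, key=rank_score, reverse=True)[:limit]
```

The point of the sketch is that nothing is "banned" here, yet every post's visibility is decided before you see anything.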
We know that tech companies sometimes consider the political outlook of content when designing algorithms. Companies have specifically targeted political categories such as right and left. From a story in Mother Jones:
“Republican lobbyists in the DC [Facebook] office said, ‘Hold on, how will it affect Breitbart?’” recalls another ex-employee. Testing showed that the proposed changes would take a “huge chunk” out of Breitbart, Gateway Pundit, the Daily Wire, and the Daily Caller. There was “enormous pushback”: “They freaked out and said, ‘We can’t do this.’”
The code was tweaked, and executives were given a new presentation showing less impact on these conservative sites and more harm to progressive-leaning publishers—including Mother Jones.
The selection of what you see is thus, at least sometimes, an explicitly political process. Political actors (lobbyists, even) help design the algorithm that determines what you see.
Beyond the fact that it happens, we don’t know very much about the explicitly political selection of content. It makes good business sense to keep it a closely guarded secret.
What we know much more about is the implicitly political selection of content. Tech companies, particularly YouTube, Facebook and Google, reduce access to sources they consider disreputable. They justified such changes by pointing to a perceived wave of misinformation after the rise of Trump, which reached its zenith with the coming of SARS-CoV-2.
Partly the exclusion of disreputable sources was achieved by targeting them directly. For example, Facebook tells page and group managers that it will penalize them for stories that have been rated poorly by “independent fact-checkers”. This can get truly absurd: Facebook once told me off for sharing something from The British Medical Journal that had been fact-checked, a fact check The British Medical Journal itself contested. See here for coverage.
It’s not just fact-checking, though. Another part of trying to exclude “disreputable sources” is promoting results from mainstream media companies. The effect of shifting attention to mainstream media sources is to narrow the range of accessible political opinions. The political space of mainstream media has The Guardian at the leftmost pole, Fox News at the rightmost pole, and The New York Times in the middle. The resultant political space is somewhat left of center by American standards and well right of center by a global standard.
I’m no fan of misinformation. I indulge in a smidgen of conspiracism (I don’t think Epstein killed himself, because he didn’t), but I am mostly, uh, reality-aligned. I have sympathy with those worried about misinformation during a pandemic. However, we don’t want the cure to be worse than the disease. It is not just bizarre claims about the vaccine and the virus that are being targeted.
The quantitative effects of the changes on outsider media are huge. Consider, for example, the Socialist Equality Party. I’ve chosen them, in part, because I’m no apologist for the SEP. Their anti-union stance drives me nuts. They’re a minuscule, ineffective, sectarian Trotskyist grouping with outdated politics, and they hold weird, troubling views on topics I’d really rather they didn’t, such as Roman Polanski and sexual assault. Their analysis has little shade or nuance.
But they do some good reporting on the World Socialist Web Site. They also sustain a good standard of factual accuracy (granted even by their political critics), often ahead of that of comparable mainstream news organizations. Nonetheless, algorithmic changes squelched their traffic by 67% in the four months between April and August 2017. Factual accuracy offered them no protection.
Google would say they didn’t target the WSWS, rather they prioritized the establishment press. The establishment press, they would argue, is held to certain factual standards. Non-establishment press varies wildly in its quality. Google can’t be expected to evaluate each website or article piecemeal. That’s likely all true. However, I don’t need to spell out for you the fact that the establishment press is not politically neutral. It is owned almost entirely by a tiny minority of people. Its crime reporters generally have close connections with the cops. Its foreign policy reporters are almost always embedded with the national security state. Its economic reporters are peas in a pod with the business community.
In other words, the mainstream press is an ideological apparatus whose separation from the state is often merely legal and formal. It acts as a mouthpiece for the establishment as a whole, or for sections of the establishment on issues that divide elites.
The character of algorithmic censorship contrasted with the character of normal censorship
Now that we have the factual background in place, it's time for the meat of the piece: the philosophy of censorship.
Our whole way of thinking about censorship, a framework broadly identifiable with the 19th-century liberal philosopher John Stuart Mill, isn't prepared for content curation in the age of digital monopolies.
To talk about this sort of stuff rightly, you have to sound like a bit of a Foucauldian. Algorithmic censorship has features which mark it out from how we normally think about censorship. Many of these features make me want to use words like biopolitical and panopticon.
I want to emphasize that these features are mostly not new in the history of censorship. They might seem new because a lot of effort is spent miseducating us about censorship. The algorithm practices the censorship of making the conversation, rather than the censorship of excluding from the conversation. The censorship of making is not an unusual case historically; I would conjecture that it is historically the most important form of censorship. We think of the censorship of exclusion as primary only because thinking of censorship that way is a pillar of liberal ideology. Let’s rethink the core differences between excluding censorship and making censorship.
Fair warning: to better express the logic of a feed algorithm, I’m going to blur the line between where we are now, and where these trends could lead “in an ideal case”.
Firstly, algorithmic censorship is inside the conversation, not merely around it: The algorithm is not a boundary keeping things out of the conversation, nor a boundary corralling the conversation within certain parameters. Rather, the algorithm censors content in the way the vascular system directs blood. Although some things are plainly disallowed, there is no such thing as merely “allowed”. The algorithm assigns every bit of content a portion of screen space before a subset of users.
Secondly, there is no ontological gap between censorship and conversation: It’s not just that the algorithm is in the conversation. Rather, the algorithm has influenced generations of the conversation, rather like evolution by natural selection. Thus it has made the conversation. Creators that thrive under the algorithm rise to the top. The behavior of others is shaped by the algorithm, for two reasons:
1. They consciously or unconsciously model the successful creators
2. They see that their work is more successful when it is algorithm-friendly. This shapes their behavior like a rat pulling levers in a Skinner box.
In turn, the algorithms are shaped and reshaped as social media executives see what is good for the company. The line between the algorithm and the content is blurry at both ends.
Thirdly, algorithmic censorship operates in an ecological, rather than a regulatory manner, thinning and managing, rather than trying to eliminate: There is no pretension of eliminating unwanted views. A smattering can be quite useful. Such a smattering legitimates the site, and prevents dissidents from looking for kinder skies. This is particularly true if the algorithm can limit viewers of that smattering of dissidence to those specifically looking for it. In the ideal case no one who didn’t already believe the dissident material would stumble across it.
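This “ideal case” can be pictured with a toy sketch. Everything in it (the per-user interest score, the threshold, the function name) is my invention for illustration, not a documented platform mechanism:

```python
# A toy illustration of the "ecological" logic described above: dissident
# content is never removed, but it is shown only to users whose history
# indicates they already seek it out. All names and values are invented.

def visible_to(user_interest: float, is_dissident: bool,
               threshold: float = 0.8) -> bool:
    """Mainstream content reaches everyone; dissident content reaches
    only users whose modelled interest already exceeds the threshold."""
    if not is_dissident:
        return True
    return user_interest >= threshold
```

Under a rule like this, the dissident material remains on the site, legitimating the platform, while effectively never reaching anyone who didn't already believe it.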
Fourthly, the censorship is experienced as an aid to the conversation, rather than a hindrance: Censorship is not something that makes the discourse harder; on the contrary, it is the ground condition of being able to talk at all! Whereas other, more obvious, forms of censorship might leave you lost for words, the algorithmic structuring of a media platform is experienced by most visitors as an aid to finding the content they’re looking for. Thus resentments against the algorithm don’t accumulate as they do against most censorship regimes. Rather, resentment is focused on the tip of the iceberg: bans and shadowbans.
Fifthly, the censorship is "private" not "public": Some people are going to balk at this, maintaining that nothing can be true censorship unless it is public. Nonsense: consider the Hollywood blacklists of the McCarthy era. These are enormous monopolies with intimate state connections we're talking about, not some boutique website selecting contributors.
This is not to say the state is absent; far from it. The censorship happens in a weird nexus of quasi-state, quasi-corporate interests. Johnstone, for example, mentions the fact-checking role of the US state-funded Atlantic Council. We might talk more broadly of the ideological apparatus of the state, e.g. the so-called “foreign policy blob”.
Yet in other senses, the relative distance of the formal state matters. The censorship is wielded to defend specific, not general, accumulation. Directly, the algorithm is used for the defense and advancement of particular companies, not capital overall. Indirectly, of course, it serves capital as a whole, but in the most direct and final sense, shareholder value is king.
Sixthly, this censorship doesn’t have clear, publicly known targets or goals: When we think about censorship we often think of circulated lists of banned material or topics, yet algorithmic censorship is very different. Often, the companies hide the details of what the algorithm wants. This is for many reasons; one is that otherwise people could hit the target with content perfectly designed to score well without honoring the spirit of what the companies are looking for (cf. Goodhart’s law). In other cases, they might make aspects of the algorithm very clear, e.g. during the disastrous attempt to pivot us all to video. But, to a large degree, we are not meant to know the rules that prioritize content, even as these rules govern our conversation.
Also unclear is the extent to which companies are motivated by direct financial interests, versus longer-term interests in cozying up to state and political actors.
Seventhly, algorithmic censorship is so pervasive that it makes imagining an alternative difficult: Algorithmic selection of the content we look at is so pervasive that it’s difficult to even imagine an alternative. What would a reasonable, politically neutral principle for serving up content on platforms look like? Most people seem unaware of the issue. Almost all criticism of the political role of media companies has focused on bans, rather than their algorithmic powers. Almost no one is demanding an unfiltered feed. Like the proverbial fish that doesn’t see water, we don’t see it.
A different feed is possible. We can’t escape algorithms; any content-serving formula is an algorithm. But it would be possible to create an algorithm that works only on a bare, content-neutral minimum. Such an algorithm would consider only factors like views since posting, number of positive interactions with the post, number of comments, and so on. If these factors are too slim, one could also throw in a past success rating: how often has content posted by this user been popular in the past? I concede that there will be great philosophical difficulties in separating the content-based from the content-neutral, but some things are clearly more in one direction than the other.
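A minimal sketch of such a content-neutral score, assuming only the factors named above (views, positive interactions, comments, past success). The field names, weights, and the log-damping are my own illustrative choices:

```python
# A minimal sketch of a content-neutral ranking. It looks only at
# engagement counts, recency, and past success, never at what the post
# says or who wrote it. All names and weights are assumptions.

import math
from dataclasses import dataclass

@dataclass
class Post:
    hours_since_posted: float
    views: int
    positive_interactions: int
    comments: int
    author_past_hit_rate: float  # fraction of the author's past posts that did well

def neutral_score(p: Post) -> float:
    # Engagement per hour, dampened with log1p so runaway posts don't
    # dominate forever, plus a small prior from the author's track record.
    engagement = p.views + 2 * p.positive_interactions + 3 * p.comments
    rate = engagement / max(p.hours_since_posted, 1.0)
    return math.log1p(rate) + 0.5 * p.author_past_hit_rate
```

Nothing in this formula can distinguish a dissident post from an establishment one; that is the whole point of the design.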
Eighthly, this form of censorship challenges liberal categories around censorship: The most obvious challenge is to the public/private dichotomy which animates liberalism. As we alluded to above, in a formal sense, Twitter is a private space, not a public square; thus, according to classical liberalism, it’s fine for it to censor. Yet if a huge chunk of the conversation is happening in one place, is it fair to treat that place as private in the same way a saloon might be treated? What about Twitter's returns to scale and network effects?
However, the conceptual difficulties in the liberal framework run deeper than this. Even the core conceptual category of censorship discourse, the idea that some things are censored and others are not, breaks down in the world of the algorithm.
Another category distinction in liberal thought that gets put under tension in this context is the distinction between regulating form and manner and regulating content. I hope to expand on this in a later essay.
We might also wonder how applicable the Millian defense of free speech is: whether it could be updated, and whether it should be.
Finally, this censorship is sold to liberals as a weapon against ignorance and the right, but above all hurts those to the left of the current Overton window:
The right has its own communication channels. Right-wingers want right-wing content and will seek it out, so media companies give it to them. Despite disturbing moves like banning Trump from Twitter, there are limits to what media companies can do against the right. Even right-wing material that is heavily suppressed, pandemic denialism for example, is popular enough that it can't be squelched quietly. Thus the right is mostly unharmed; if anything, they wear it as a badge of honor. To the extent the suppression has any effect at all, it may have been to weaken some of the right's sillier elements, e.g. Alex Jones.
The actually-socialist, left of liberal, far left, however, is a vastly smaller formation in American life. It is not well-rooted in the culture. Thus algorithmic censorship is a far bigger threat against it.
Resistance
At this point in the anthropological inquiry, it would be traditional to try to flip this around and tell a story of how the plucky resistance is subverting the algorithm and power is never total. No doubt that’s partly true and will be for the foreseeable future. However, machine learning algorithms are going to get better and better, and megacorporations are likely to get more deeply integrated into the state. My outlook is gloomy.
I don’t have solutions. But I do have three thoughts on possible strategies:
* Clearly, it is necessary to create websites that are, as far as possible, outside this hegemony: some kind of independent platform that catches on, maybe one of the open-protocol social media sites people have been working on. But a caveat: one of the biggest mistakes that could be made would be creating an explicitly politically branded platform, tied to the left, the right, libertarians, or even radical centrists. I never thought I’d find myself saying this, but maybe we need to tap into that contentless resistance vibe of the ’90s: create something that is genuinely politically open in its amorphous opposition to “the man”. I never imagined I’d find a practical use for this sort of Adbusters politics, but this might be it.
* We need to get clearer on exactly what it is we’re objecting to, and what our alternative is. One can’t simply object to algorithms selecting the content one sees, because every possible method of selecting content by computer is algorithmic. Instead, we need to start thinking about what we’re willing for an algorithm to do, and what we’re not. We need to start envisaging what content-neutral or content-fair algorithms could look like. An algorithm that only considers metrics such as likes and views is one possibility, but there are many others to explore; for example, giving every user the choice of which third-party-designed algorithms they’d like to employ.
* A friend of mine has suggested that we should demand the creation of a national social media infrastructure owned by the state. This would by no means end the censorship, but it is easier to hold the state accountable for content choices than quasi-private monopolies that are thick as thieves with the state. A provocative suggestion well worth discussing. I suspect that a state-run platform would be less indiscriminate and prolific in what it cracks down on, but when it did crack down, it would crack down harder. Still, worth thinking about, though on balance I am opposed.
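The third-party-algorithm idea from the second bullet can be pictured as a registry: the platform holds the posts, and each user picks which registered ranker orders their feed. This is a sketch under invented names, not any existing platform's API:

```python
# A sketch of user-selectable, third-party ranking. The platform exposes
# posts; users choose which registered algorithm orders their feed.
# All names here are illustrative, not an existing API.

from typing import Callable

Post = dict  # e.g. {"id": ..., "likes": ..., "posted_order": ...}
Ranker = Callable[[list], list]

RANKERS: dict[str, Ranker] = {}

def register(name: str):
    """Third parties register their ranking algorithms under a name."""
    def wrap(fn: Ranker) -> Ranker:
        RANKERS[name] = fn
        return fn
    return wrap

@register("chronological")
def chronological(posts):
    return sorted(posts, key=lambda p: p["posted_order"], reverse=True)

@register("most_liked")
def most_liked(posts):
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

def build_feed(posts: list, choice: str) -> list:
    # The user's chosen ranker, not the platform, decides the order.
    return RANKERS[choice](posts)
```

The design choice that matters here is that the ordering logic is swappable and chosen by the reader, which is exactly the power the essay says platforms currently keep for themselves.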
>There is no pretension of eliminating unwanted views.
You must be talking about social media networks that aren't Facebook. I haven't used Facebook in years and even I've heard about their policies re 'misinformation.'
>A friend of mine has suggested that we should demand the creation of a national social media infrastructure owned by the state.
Incidentally, I would tell your friend to be careful putting the word 'national' right before 'social,' unless they mean this to be a low-pitch dogwhistle
You're using high-tech toys to avoid talking about what is ultimately a more meaty, less silicon-y problem: Boomers and their descendants don't feel like standing up for principles, so a minority of popular-kid psychopaths has managed to recruit hordes of hyenas that can achieve anything via loud screeching.
Much of what they want to achieve takes the form of "I don't like him! Make him stop talking!"
Leftists know something about hormones now, don't they? The root of our problems is the human biomass. If you can tell me why adult male testosterone levels have dropped 50% in the last half century, I can tell you why we've gotten more cowardly, conformist, and, literally, weaker.
Reasons for algorithmic filtering:
1. Breaking limitations of human attention (Paradox of Choice, Hick's Law etc.)
2. Condensation of social consensus (Language models & Knowledge Graph formation)
Reasons against algorithmic filtering:
1. Lack of transparency in filtering (AI black box model)
2. Inability to do counterfactual research (see https://desystemize.substack.com for more)
3. Lack of decentralization and factions of information (contra Revipedia)