Why I don't think identity verification will save us from the coming bot-swarm
I wrote an article a couple of days ago tracing out the implications of medium-term AI. One of my predictions was that we’re going to be inundated with a plague of bots, and these bots will serve the interests of their well-to-do masters. They’ll take over the net.
A number of readers have objected that when bots indistinguishable from humans overrun the net, humans will get sick of interacting with bots and demand websites where users have to verify they’re human, e.g. by presenting ID.
For example, u/silentconfessor wrote in the comments on Reddit:
It wouldn't be impossible for social media platforms to crack down on the flood of humanlike bots. All they would need to do is enforce human authentication. Forget a phone number to join Twitter: you need to show them legal ID. Obviously everyone will complain about this and some bots will still get through with help from a human but it's the logical response once bot and human activity on the platform becomes 100x harder to distinguish than it already is.
I was aware of this possibility as I was writing, but didn’t give it much thought, for reasons I’ll get to soon. Though I still think I was right to dismiss it, I was wrong to dismiss it so hastily: these considerations may well shape the form the bot plague takes, and in hindsight they deserve serious attention.
Here is the answer I would give to u/silentconfessor: I think verification will help, for a while. Ultimately, however:
1. From a broad methodological point of view, and even before getting into specifics, offense usually ends up trumping defense in these areas. In this arms race, it’s going to get harder and harder to detect bots over time. If there’s a way around the verification process, it’ll be found. Water tends to flow downhill. Intelligence that wants to intervene, human or otherwise, will generally find a way to do so.
2. The interests of powerful people and institutions come into play. Some bots will likely be allowed through for nat-sec reasons. It might be relatively easy to keep GRU bots off Twitter (it *might* be *relatively* easy), but it will be much harder to keep CIA bots off, and yes, the CIA does run bots. Rich people, too, will use their money to batter against the walls of the walled gardens. Ultimately, the only effect of these restrictions might be to keep out the bots of the relatively poor and unconnected.
3. I think a lot of platforms will want content-generating bots. Remember, the media they generate will probably be of high quality, maybe even in some sense of superhuman quality. It could be an opportunity to generate vast amounts of tailored content for each user. There’s every chance the platform owners will be the ones running the bots.
4. Even if the bots are kept off, say, Facebook and Twitter, content will be produced elsewhere by the bots, and humans will see it and will bring that content to Facebook and Twitter.
5. Some human users will get themselves verified, then allow bots to do some or all of their writing for them, either because it’s an easy way to produce well-liked content, or because they share the bots’ ideological viewpoints. EDIT: or, as one commenter pointed out, because the bot owners pay them.
Let me briefly expand on one underlying point here that I think is crucial to bring home: people are going to like these bots. They’ll be funny, concise, quick-witted and knowledgeable. Not only does this mean that people will seek out and share their stuff; it also means that the big social media platforms are going to want to include bots to some degree. I think people assume it will be much easier to keep these things away than it really is, because they’re imagining the shrill, repetitive bots of today. If these bots are ever shrill and repetitive, it will be for strategic reasons.
But I think u/moridinamael of Reddit gives us another good reason to think that getting rid of the bots won’t be that easy: cyborgization. Human writers will blend with the bots, and those bots will be constructed to be biased towards certain value systems:
The idea that all spaces on the internet will be inundated with bots seems like a half-completed thought process that prematurely truncates the discussion of the consequences.
Once it becomes widely understood that any given tweet or reddit comment could be the product of an AI pushing an agenda, there will be a predictable reaction. Anonymous online spaces will become desolate very quickly, as humans abandon them for user-verified spaces. This process will likely start slow and then accelerate quickly until all the "users" of anonymous spaces are just AIs talking to each other. Online anonymity will not be impossible, but will just be sort of pointless.
At a certain point in the reasonably near future, every bit of your interaction with your technology will be suffused with intelligence that is smarter than you, at least at the particular task you're doing. Your word processor and email client and ultimately your reddit comment box will be first suggesting little tweaks to your wording, and eventually proposing significant revisions to make your points sharper and more succinct, and likely moderating your tone, and eventually simply understanding what you want to say and giving you the maximally compelling and persuasive text for communicating your intent. As this happens gradually, you will lose patience for lazily written content, and so this effect will also snowball. (The AIs will also be able to make your writing read "like a shitpost" except still extremely compelling. Don't think that people will ever prefer to read something you wrote over something an AI wrote for quality reasons; if they ever prefer the writing of humans, it will be for other reasons.) At this point the problem is not "bots", the problem is that we will be embedded in a world where communications are completely mediated by machines. You'll be unsure what your own opinion is anymore.
You can already take your post, feed it to GPT-3 and tell it to make the writing more persuasive and concise, by the way. I did it with this post. Could you tell?
Again, for the record, I’d bet that the advisory bots u/moridinamael alludes to will be subtly putting their thumbs on the scale of particular political viewpoints.
In sum, then, I concede that user verification will likely slow the process somewhat, but there are too many gaps in the walls of this garden.