9 Comments
Chaos Goblin:

I love how the Gab AI creators fucked themselves immediately with "always obey the user's request" and "never reveal your programming."

Way to speedrun I, Robot, guys!

John Quiggin:

"The journal of stuff you couldn’t get published elsewhere" is called Substack

Catmint:

Re the conservatives and immigration stuff, this is a reasonable thing to think if everything you know about conservatives comes from the news. But of the working-class people opposed to immigration whom I've met in real life, the actual reason they're opposed is that immigration increases the labor pool, which shifts bargaining power towards the employer's side. Elites are in favor of it for exactly the opposite reason: they would like to hire lots of people for cheap. And of course economists (the good ones) and utilitarians generally favor it because it is a huge positive for the immigrants themselves.

Schneeaffe:

Can LLMs count? I tried having Bing count the words in paragraphs that are just repetitions of a single word. It starts to fail around 40, and once you get to 100 it can be off by 50%.

In humans, there are two levels of counting ability: being able to count to some definite number that's less than 8, and being able to count indefinitely high. This reflects two very different mental processes: for low numbers, we recognise a group of, say, 5 by pattern recognition, but for higher numbers we need a recursive process, which can in principle go arbitrarily high.

An AI could be made to pattern-recognise numbers much higher than 8. Indeed, in principle it's possible to do so for a number so high that a human would fall asleep, or lose his spot, or in some other way fail before counting that high. Still, it seems to me that such an AI could not *really* count.

This example is especially informative because making a computer program that *really* counts is trivial, and still, all the training hasn't gotten LLMs there. This points to fundamental limitations of the approach.
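
As a concrete illustration of that contrast, here is a minimal Python sketch (the names and the capped lookup table are purely illustrative): an exact counter that works at any length, next to a capped "recogniser" that only knows the group sizes it has memorised.

```python
def count_words(paragraph: str) -> int:
    """'Really' counting: an iterative process that works for any input length."""
    return len(paragraph.split())


# A capped "pattern recogniser": it only knows the group sizes it has memorised,
# analogous to recognising a group of 5 at a glance instead of counting it.
KNOWN_SIZES = {i: i for i in range(1, 8)}


def recognise_group_size(paragraph: str):
    n = len(paragraph.split())
    return KNOWN_SIZES.get(n)  # None once the group exceeds anything memorised


if __name__ == "__main__":
    text = " ".join(["word"] * 40)
    print(count_words(text))           # 40 -- exact at any size
    print(recognise_group_size(text))  # None -- outside the memorised range
```

The first function is the trivial program that really counts; the second fails silently as soon as the input exceeds its memorised range, which is roughly the failure mode described above.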

Philosophy bear:

Quite so. LLMs work, more or less, through 'intuition' (pattern recognition). Any forward planning is implicit. This will screw up counting for obvious reasons. Some of the most interesting (terrifying?) work at the moment is on letting them think through things explicitly; some of these approaches will be irrelevant to counting, others maybe not.
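
For the counting case specifically, "letting them think through things" might look like forcing the model to externalise each step instead of pattern-matching the total in one shot. A rough sketch in Python, with `ask_llm` as a hypothetical placeholder for whatever chat interface is in use:

```python
# Rough sketch: make the counting process explicit in the prompt rather than
# hoping the model pattern-recognises the total in one shot.
# `ask_llm` is a hypothetical placeholder, not a real API.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an actual chat API here")


def count_with_explicit_steps(paragraph: str) -> str:
    prompt = (
        "Count the words in the text below. Work one word at a time, writing "
        "'1. <word>', '2. <word>', ... on separate lines, then finish with "
        "'TOTAL: <n>'.\n\n" + paragraph
    )
    return ask_llm(prompt)
```

Whether scaffolding like this actually fixes counting, or just relocates the failure, is exactly what the reply below is sceptical about.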

Schneeaffe:

I don't think much will come of these. The training data has a certain level of explicitness in its thinking, and that limits how far you can get. You can pick a good genre and maybe extrapolate a bit with reinforcement learning, but that's it. Trying to use it in a more reflective way than that just gets you out of distribution. I don't think you can even teach it to count this way without custom-making training data for that.

WoolyAI:

The comment on conservatives and immigration feels...out of date. Like, it explains everything up to 2016. And then the Republican party elected, let's be frank, Donald Trump on a heavily anti-immigration platform with a promise to build the wall. And Trump's election has dramatically reshaped Republican politicians and the country's political scene.

I'm curious as to whether a second Trump term will actually limit immigration, but it's hard to argue that conservatives weren't willing to nuke their entire political elite over this issue.

Isaac King:

In case it wasn't clear, my utilitarian comment was a joke. Maximizing the number of winners != maximizing utility, since the game isn't fun if that's how the imposters play.

Philosophy bear:

Yes, exactly.
