It’s been widely agreed among psychologists and exasperated members of the general public that conspiracy theorists are beyond salvation by reason alone.
Potentially of interest:
https://www.lesswrong.com/posts/hurF9uFGkJYXzpHEE/a-non-magical-explanation-of-jeffrey-epstein
https://www.lesswrong.com/posts/DZoGEHzZNRsMjfpfE/addendum-a-non-magical-explanation-of-jeffrey-epstein
I strongly disbelieve that paper means what people are saying it means. The hype immediately triggers my BS-detector. But it's not worth my time to dig around about details.
Now, regarding "It is so strange hardly anyone on the left tries to be persuasive" - this is a sampling effect. Consider how patient and skilled a person would need to be in order to do the persuasion you recommend. How many could reasonably qualify? Alas, being on the left does not automatically imbue someone with huge alignment bonuses to Charisma and Wisdom.
Note, it's really not clear that one-on-one persuasion is a very good "helping" strategy for political change. People recommend it as a cliche, but there are a lot of obvious problems in practice. The more "political" left does think about this, and has arguments against putting too much emphasis on personal "therapy".
> But it's not worth my time to dig around about details.
This is the advantage quality conspiracy theorists bring to the table: insatiable curiosity.
> Now, regarding "It is so strange hardly anyone on the left tries to be persuasive" - this is a sampling effect.
This is a Naive Realism based wild guess....classic Normie Thinking.
> Look at how both patient and skilled a person would need to be, in order to do your recommended persuasion.
Where should we look? You presume to know these things, do you?
> Note, it's really not clear that one-on-one persuasion is a very good "helping" strategy for political change.
Do any sound conclusions emerge necessarily from this observation?
Two big problems with this:
1) The participants are not representative. IMHO most conspiracy theorists would not consent to take part.
2) They were arguing with an LLM. It seems highly likely that a computer will not evoke the same emotional defence mechanisms that a human would. This could be controlled for by telling participants they are conversing with another human but, even then, it would lack the direct, personal, confrontational element.
"I do not think these experimental results reflect a special power of large language models beyond the reach of all or most humans."
Actually I think the LLM might have a particular edge here. It knows just about everything there is to know about the favorite conspiracy of the person it's trying to persuade, probably more than they do. It probably has a good understanding of why they believe what they believe, and could easily argue the other side with just as much conviction, though its fine-tuning disinclines it from doing that. Most sane people are going to stop obsessing over something once they start to believe it's bullshit, but the LLM finds all human language to be equally fascinating and worthy of attention, from the Facebook ramblings of our less mentally well extended family to the very best feats of academics.
Related, on the topic of “people probably aren’t impervious to reason”: https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/
Hi, just found your substack via Scott Alexander linking your survey, then came to see.
Commenting here bc 1. you seem to share one of my obsessions, which is bears (I have two gigantic almost life-sized photos of bears as the artwork in my office...love your bear artwork on your substack). Wondering if you like bears because they're cute or terrifying or admirable or all? I have dreams about bears...which are scary dreams where they're attacking me or my pets...at least once a month. Yet I also find them adorable and funny. So I guess my fascination is both attraction and fear...was wondering what yours was.
2. I also have never found persuasion that hard, though so many people claim it's impossible. I know for a fact I have radically changed many people's opinions on politics and religion, and they have told me so. In fact I have sometimes regretted it, as I've talked some people out of their religious beliefs and it seems to have affected them badly, which was not my intent.

Anyway, just wanted to say I agree with you: it certainly can be done, though only if done with genuine good will, and only if the person believes you like them and wants you to like them/admires you at least somewhat. It also takes time. I think that changing one's mind on a core belief is somewhat like learning a language, or perhaps like gaining muscle memory for a skill like playing a musical instrument. At the beginning, they simply don't have the neurons wired in a manner that allows connections between certain words and emotions. They hear certain words and it immediately sets off their neurons flashing in a manner that means BAD and THREAT and SCARY. So it takes time and repeated exposure to rewire things, much like learning a new language. If you have someone in an emotionally open state, like when they fall in love or when they're enjoying humor and laughing, the process can be vastly accelerated.
> Awkwardly the conspiracies tackled included some that seem to me to have merit
Someone should some day do an analysis on Normies doing "analysis" on "conspiracy theorists" (or more accurately: their ironic, dream world propaganda fuelled imagination of them).
> clearly nuts stuff like corporations have a secret cure for AIDS and cancer
Watch out for Naive Realism!
> I do not think these experimental results reflect a special power of large language models beyond the reach of all or most humans. I think many people can persuade perfectly well if they put their mind to it. Even me and you, dear reader.
If humans don't do something fast, I don't think they are ever going to catch up to the power that exists within LLMs today (much of which is not yet identified & harvested).
> I never found persuasion especially difficult.
Would you like to try some on me? You can nominate topics that are easy for you, and I will choose which to use.
> But the results are, I suspect, not unobtainable.
Watch out though: if we do not try, the obtainable cannot be obtained. And as you say: we do not try.
> This is especially true given our great advantage: we can talk to people in real life, not just via text, giving us significant leverage if we use it.
I recommend you check your premises, and your variable types.
> It is so strange hardly anyone on the left tries to be persuasive.
Strange, but not surprising.
> Here is my advice:
It's a great list, but good luck finding a single human who can do even a few of those at a high level, let alone all of them. And if you think about it: why should it be any other way!!?? After all: how many people do you know who can juggle three balls?
> Informal fallacies (FOR GODS SAKE DON’T GO AROUND MENTIONING THEM BY NAME LIKE YOU’RE CASTING SPELLS FROM HARRY POTTER. NO ONE LIKES THE GUY WHO SAYS AD HOMINEM).
This is such good advice!
> The basics of probability theory, including Bayes theorem and its application to everyday life.
And also, what those things almost always are when manifest at the object level (sophisticated hallucination).
5-ball juggler here. ;-) And hey, p(a|b) = p(b|a) * p(a) / p(b), yo. If you think of AI as our descendants, evolution in action, the thought of their replacing us is less troublesome.
Good to know you found a way to cope. Meanwhile, I don't want my human children to die and be replaced by a superintelligent robot. It sounds like this disagreement should at least be put to a vote, instead of being left in the hands of a handful of companies (or rather, in the hands of impersonal market forces giving them incentives to race ahead).
Ur-AIs' current replacement strategy seems to be to make our lives so compelling, meaningful, safe, fun et cetera that our fertility rate falls below replacement; it's already happened for the rich half of humanity. This seems like a very benign approach. Also, if you don't want your children to die, post-biological approaches like uploading might be necessary; to date, ~94% of the people who've ever lived have died, you know?
I have to resort to ChatGPT:
"The probability of event A occurring given that event B has occurred equals the probability of event B occurring given that event A has occurred, multiplied by the probability of event A occurring, all divided by the probability of event B occurring."
Assuming this is a proper translation: are there any unstated axioms?
A couple of models of interpretation, anyway: Bayesian or frequentist: https://en.wikipedia.org/wiki/Bayes%27_theorem#Interpretations
Would it be fair to say that some sort of consistency is an assumption? Or perhaps another way of saying it is, does the prediction power of it vary according to the problem space?
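To make the verbal translation above concrete, here is a small worked example in Python applying Bayes' theorem to the classic base-rate problem. The numbers (a 1% base rate, a 95% sensitivity, a 5% false-positive rate) are illustrative assumptions, not figures from this thread:

```python
# Bayes' theorem: p(A|B) = p(B|A) * p(A) / p(B)
# A = "has the condition", B = "tests positive".

p_a = 0.01              # prior: 1% of people have the condition
p_b_given_a = 0.95      # sensitivity: chance of a positive test given the condition
p_b_given_not_a = 0.05  # false-positive rate

# p(B) via the law of total probability (one of the "unstated axioms" in play):
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior: probability of the condition given a positive test.
p_a_given_b = p_b_given_a * p_a / p_b

print(round(p_a_given_b, 3))  # prints 0.161
```

Note how the posterior (~16%) is far below the test's 95% sensitivity; this gap between the two conditional probabilities is exactly what the theorem keeps straight, and its predictive power does depend on how well the prior and likelihoods match the problem at hand.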
I don't like the phrase "conspiracy theory" - I much prefer "low information, high motivation theory". The first is just something to dismiss, the second gives you some toe holds to reason about.
People believe LIHM theories because they are satisfying. Some information is very expensive, some information is very hard to evaluate. It's unsatisfying to deal with a lot of complexity, it's much more satisfying to have a narrative with villains who can carry the blame.
I believe that the human brain seeks satisfaction, not truth. I believe that the human mind prefers narrative structure, and any unstructured information is either put into a narrative, or dropped.
Conspiracy theories take place in a framing story; it's very unlikely that a debunk will be more satisfying inside this framing story, so one goal is to give people an alternate perspective, make their framing story bigger.
For instance, if a novel virus was first discovered in close proximity to an institute of virology, that will be a very sticky framing story. If you also know that novel viruses/strains have been discovered several times previously, that can be part of another framing story.
Understanding complex systems involves suspending your satisfaction. This is difficult, and like the author said, people are much more likely to do this for people they consider to be friends.
I don’t know a bunch of these things listed for critical reasoning. Any recommended readings?
Have I ever not?
I strongly agree here: non-conspiracists seem to spend essentially zero time or effort persuading people, then claim it's impossible. I know I've moved, subtly but clearly, many of my colleagues at work to more redistributive beliefs (I work in finance) over a period of years. I have at no point debated anyone, but have often made context-relevant corrections (my colleagues seemed to think the average wage in London was £80k instead of the true figure closer to £30k), gently getting them to consider that their views, that 1) most people are not intelligent, 2) anyone can become rich if they just work hard, and 3) intelligence is extremely valuable and justifies their salaries, may not be internally consistent...
I myself have been persuaded by reason, and if you meet someone where they are without putting their hackles up I think most people can be at least shifted. The commenters saying "try me" are entirely missing the point; the entire idea is to not get people on the defensive.
If I'm persuadable, or even worse, if I admit to being persuadable, it just means advertising that leftists and other scum can influence me.
I very much appreciated your list: big enough to be scary, yet small enough to be tackled. I seek out conversations with folks I deeply disagree with (did a dialogue with a prominent flat-Earther a while back), and I’ve been treating Boghossian’s “Impossible Conversations” as my lodestar. If you’ve read it, how did you find it?
Nope:
https://www.yudkowsky.net/rational/bayes
https://arbital.com/p/bayes_rule/
That sure is asking a lot, both in emotional terms and in time spent on cognitive effort, even when it works. In some ways, I find point 8 particularly difficult. Living with some level of sanity, for me, means not digging into details about nonsense. At least the other things would lead to useful knowledge, or maybe in the end a positive personal relationship (which seems very optimistic to me, even buying the premise).
> means not digging into details about nonsense.
How do you know something "is nonsense" if you know not of the details?
> At least the other things would lead to useful knowledge
How do you know this?