Republishing an old essay in light of current news on Bing's AI: "Regarding Blake Lemoine's claim that LaMDA is 'sentient', he might be right (sorta), but perhaps not for the reasons he thinks"
I am republishing this essay because of recent discussions about erratic, ‘emotional’, and aggressive behavior by Bing’s AI, Sydney. Some have questioned whether it is ethical to run Sydney given that behavior, and others have responded with “don’t be ridiculous, of course Sydney doesn’t feel anything; Sydney is a machine for predicting the next token of text”. While I am inclined to agree with that conclusion on the whole, I think the issue is a bit more complex. For ongoing discussion, see:
Turning your argument around, you actually seem to be saying that since a simple architecture is enough to convincingly simulate persons, perhaps personhood is much less complicated than is usually believed. For instance, people might consist of heaps of small/simple personas, and when we simulate someone we are actually simulating a persona relevant to the application we have in mind, such as someone's often-used mannerisms, or their stance on a particular question that we are thinking about. Is this in accord with your thoughts?