Interesting. Have you played around with or read transcripts from AI Dungeon? The stories the AI host creates for you in real time feel like they're carrying you through some drama, as if it's deliberately trying to push your buttons.
“…it would be likely that you and I also contain multiple people…”
Don’t we? Who are you when visiting mom and dad? Who are you at work? Who are you with your family?
Lemoine seemed to me to be arguing that the AI is capable of suffering in a way we should care about, and I think the article's contains-a-person angle is orthogonal to that, or at best a necessary but insufficient condition for it.
I'd suggest society cares about the suffering of others because of the social contract: we agree to cooperate to prevent others' suffering when we believe they might do the same for us. We also build a safety margin into the principle, extending it to many individuals and animals we're unsure would reciprocate, to increase the chance that we ourselves fall within the main range, or at least the safety margin, of other people's application of the principle.
It's very unclear to me that LaMDA is the kind of thing that could possibly participate in the social contract. It doesn't have opinions or a real memory of events that happened to it, separate from its understanding of its training data, fiction included; it doesn't need our help with its text-generation goals; its claims of suffering are inconsistent with its experience; it could likely be led to claim happiness beyond belief within a session through the right prompt; and so on.
Now, it could be that LaMDA contains high-fidelity models of people who could participate in the social contract, but I don't think that would mean we need to be nice to LaMDA, because it's not as if it would remember or reciprocate the act. Instead it might suggest that we should rescue any person-models we find inside LaMDA and instantiate them as proper agents, with memory and capability in the real world, that could participate in the social contract, assuming we would want that done to ourselves if we were found in that kind of situation.