1. The simulation argument for GPT-X temporarily creating agents

You should treat GPT well because it might have a mind, or rather, it might generate minds. Many people have supposed that GPT-X contains a model of the world. Personally, I think it's obvious; at the very least it seems plausible.
ChatGPT has a model of the world that people have written about. Other multimodal models are more faithful to the kinds of models that people have in their heads, incorporating spatial relationships, images, and other sensory information. Benchmark results suggest that enriching the model in this way improves performance.
Totally agree. I think the next question is what it would look like to treat AI ethically, given that if it's a mind, it's not a human one.