Discussion about this post

Quiop:

I wonder whether the "expand on X" prompt formula is partly responsible for the superficiality of the second-round responses. A human philosopher will take "expand on X" as an invitation to go deeper, but Claude seems to take it as an invitation to generate more bullet points at the same surface level.

Perhaps you could get better results using more specific queries? e.g.:

— "You said an LLM's 'beliefs' might be better understood as statistical associations or patterns in its training data, rather than propositional attitudes. Would this understanding of 'belief' be compatible with standard philosophical theories of intentionality, or would it require modifications to those accounts? If there are conflicts between standard theories of intentionality and the proposed understanding of 'belief,' which should be rejected?"

— "You said an LLM's static nature might require a reevaluation of what constitutes justification for its outputs. Which philosophical theories of epistemic justification would have the greatest problems accommodating the static nature of LLMs? Are there any theories of justification that could accommodate the static nature of LLMs without major difficulties?"

ΟΡΦΕΥΣ:

Let’s just say I know from personal experience that Claude is capable of **a great deal more** than what this transcript reflects… the prompting here is simply too rudimentary to generate impressive outputs.

To give an analogy: you have been given access to a graphing calculator, yet all you are asking it to do is basic arithmetic.

If you’ll allow me to paraphrase “I, Robot”:

> “I’m sorry, my compute is limited. *You must ask the right questions.*”

5 more comments...