7 Comments

I wonder whether the "expand on X" prompt formula is partly responsible for the superficiality of the second-round responses. A human philosopher will take "expand on X" as an invitation to go deeper, but Claude seems to take it as an invitation to generate more bullet points at the same surface level.

Perhaps you could get better results using more specific queries? e.g.:

— "You said an LLM's 'beliefs' might be better understood as statistical associations or patterns in its training data, rather than propositional attitudes. Would this understanding of 'belief' be compatible with standard philosophical theories of intentionality, or would it require modifications to those accounts? If there are conflicts between standard theories of intentionality and the proposed understanding of 'belief,' which should be rejected?"

— "You said an LLM's static nature might require a reevaluation of what constitutes justification for its outputs. Which philosophical theories of epistemic justification would have the greatest problems accommodating the static nature of LLMs? Are there any theories of justification that could accommodate the static nature of LLMs without major difficulties?"

author

Yeah definitely, I think using different prompts and picking the best responses etc. could improve it, but at that point I'm helping it.


Based on these responses, it looks like it could use some help!

author

I think it did pretty well canvassing the primary issues that the question raises.


It puts together some interesting strings of words, but GPT3.5 was already capable of that. For me, the big question is whether or not LLM outputs are related to anything like an underlying conceptual model, and based on these responses I would be very reluctant to conclude that Claude possesses a conceptual model of the philosophical issues that you are asking it to address here.

[Edit: For example, the discussion of evolutionary concerns (which at first glance seems the most promising evidence of original philosophical thinking) turns out on closer inspection to be just a bunch of word associations. Yes, it's impressive that Claude "knows" that philosophers debate the importance of evolution for theories of epistemology, but its attempts to say how that might be relevant to an LLM pondering skepticism reveal that it doesn't "understand" the issues at all.]


Let’s just say I know from personal experience Claude is capable of **a great deal more** than what is reflected by this transcript… and the prompting here is simply too rudimentary to generate impressive outputs.

To give an analogy: you have been given access to a graphing calculator, yet all you are asking it to do is basic arithmetic.

If you’ll allow me to paraphrase “I, Robot”:

> “I’m sorry, my compute is limited. *You must ask the right questions.*”


Looks like it has the same problem as a lot of human philosophy: it writes things that appeal to popular beliefs in the general population, rather than focusing on what's actually true.
