6 Comments
John Quiggin

I won't bet either, but I'm pretty confident that this won't happen on the indicated timescale. Apart from anything else, these journals mostly have super-high rejection rates, so even a good article is likely to miss out. But more generally, given the way these models work, the highest level of originality they can manage is a new combination of existing ideas. And without a way to assess which combinations are likely to be interesting to the referees of a philosophy journal, that's unlikely to produce a publishable article.

Putting this more positively, if the models could incorporate an assessment of their own output, similar to a board evaluation for a chess-playing program, they could produce and assess lots of rearrangements of the material in their training set, and choose the best ones. Once that happens, the sky is the limit. But it hasn't happened yet.
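
To make the chess analogy concrete, here is a minimal sketch of that generate-and-rank loop. It is purely illustrative: the function names are hypothetical placeholders, not anything current models actually expose, and the evaluator is exactly the missing piece being described.

```python
# A minimal generate-and-rank sketch (hypothetical functions throughout).
# The idea: produce many candidate recombinations, score each with a
# self-evaluation function, and keep only the highest-scoring ones --
# loosely analogous to a chess engine's board evaluation.

import random
from typing import Callable, List


def best_of_n(
    generate: Callable[[], str],       # hypothetical: draft a candidate article
    evaluate: Callable[[str], float],  # hypothetical: score likely referee interest
    n: int = 100,
    keep: int = 3,
) -> List[str]:
    """Generate n candidates and return the `keep` highest-scoring ones."""
    candidates = [generate() for _ in range(n)]
    candidates.sort(key=evaluate, reverse=True)
    return candidates[:keep]


# Toy stand-ins so the sketch runs; a real system would call a language
# model for generation and would need a trained evaluator for scoring.
def toy_generate() -> str:
    ideas = ["trolley problems", "modal realism", "virtue ethics", "Bayesian epistemology"]
    return " meets ".join(random.sample(ideas, 2))


def toy_evaluate(draft: str) -> float:
    return random.random()  # no real notion of referee interest -- the missing piece


if __name__ == "__main__":
    for draft in best_of_n(toy_generate, toy_evaluate, n=20, keep=3):
        print(draft)
```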

Ira Allen

I suspect this is largely already possible, albeit not without significant tweaking (and so not in a way that quite fulfills the condition). I'll say, though, that as someone who reviews a fair bit, I would see this sort of incautious experiment as absolutely parasitic on, and damaging to, reviewers' good-faith expectation that they are reading human work.

Philosophy bear

I thought about this; my sense is that there is great public-interest value in this kind of demonstration, particularly from my point of view, as I feel many academics are complacent about the dangers of AI, and this could serve as a wake-up call. I also feel that many people have convinced themselves that feats like this will not happen because of fundamental limits they believe the current model of AI faces (sometimes framed in terms of 'true creativity' or 'true originality'). Challenging this confidence, and encouraging people to be more concrete in their claims about the fundamental limits of LLMs, seems valuable to me. This exercise in particular feels like it might help in mobilising the thus-far complacent professional class against unrestricted AI research even outside academia, e.g. lawyers, doctors, etc.

Ira Allen

I hear you, but I think you're wrong about the tradeoffs. Perhaps you do a fair bit of peer reviewing and find them acceptable--I certainly don't mean to suppose my own experience universal. From my vantage point, though (as a reviewer for a number of journals, an editorial board member, and an occasional guest editor), the peer-review ecosystem is not in such vibrant good health that it is wise to encourage people to make it worse by submitting (more) AI-generated work in ways that good-faith readers for journals will inevitably feel hoaxed by. I completely agree with you about the misguided (and, it seems to me, rather desperate) AI deflationism that a lot of academics devote a lot of energy to producing and patting one another on the back for (almost as bad as the "this is great" refusal to be honest at all about how it scales coming out of the more ed-tech types). People should be shaken a bit. It's just that my sense is that this sort of shaking will tend to produce unintended consequences for an ecosystem that is already--far more than most of the deflationists realize--significantly threatened by AI. If one wishes to help that ecosystem continue staggering along, this sort of experiment is probably not the way to do it.

anzabannanna

Maybe this institution should be put out to pasture if it can't cut it. If the brains in the institution are that powerful, they should use them to create methodologies adequate to handling the load.
