
I won't bet either, but I'm pretty confident that this won't happen on the indicated timescale. Apart from anything else, these journals mostly have super-high rejection rates, so even a good article is likely to miss out. But more generally, given the way these models work, the highest level of originality they can manage is a new combination of existing ideas. And without a way to assess which combinations are likely to be interesting to the referees of a philosophy journal, that's unlikely to produce a publishable article.

Putting this more positively, if the models could incorporate an assessment of their own output, similar to the board-evaluation function in a chess-playing program, they could produce lots of rearrangements of the material in their training set, assess them, and choose the best ones. Once that happens, the sky is the limit. But it hasn't happened yet.
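To make the chess analogy concrete, here is a minimal sketch of the generate-and-evaluate loop being described, in the spirit of best-of-N sampling. The functions `generate_draft` and `evaluate_draft` are hypothetical stubs, not any real model API; the point is only the structure: produce many candidates, score each with a self-evaluation function (the analogue of a chess engine's board evaluation), and keep the best.

```python
import random

# Hypothetical placeholders -- not a real API. generate_draft() stands in
# for whatever produces a candidate recombination of existing ideas;
# evaluate_draft() stands in for the self-assessment the comment says is
# missing, the analogue of a chess engine's board-evaluation function.

def generate_draft(seed: int) -> str:
    """Produce one candidate draft (stub for illustration)."""
    rng = random.Random(seed)
    return f"draft combining ideas {rng.sample(range(100), 3)}"

def evaluate_draft(draft: str) -> float:
    """Score a draft's likely interest to referees (stub: random score)."""
    return random.Random(draft).random()

def best_of_n(n: int = 50) -> str:
    """Generate n candidates, score each, and keep the highest-scoring one."""
    candidates = [generate_draft(seed) for seed in range(n)]
    return max(candidates, key=evaluate_draft)

if __name__ == "__main__":
    print(best_of_n())
```

On this picture, the hard part is not the loop itself but making `evaluate_draft` track what referees would actually find interesting, which is exactly the capability the comment says is still missing.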


I suspect this is largely already possible, albeit not without significant tweaking (and so not in a way that quite fulfills the condition). I'll say, though, that as someone who reviews a fair bit, I would see this sort of incautious experiment as absolutely parasitic on, and damaging to, reviewers' good-faith expectation that they are reading human work.
