As I approach formal retirement from my academic job, I’m still thinking about ideas in my main theoretical field of decision theory. But I’ve largely lost interest in publishing journal articles, leaving the chore of dealing with Manuscript Central and other robotic systems to my younger co-authors in the case of joint work, and not submitting many of my own. I’ve also gone retro on reviewing. If I’m invited to review a paper, I write back to the editor and offer to do the job as long as they send me the manuscript directly.
That distance from the process gives me a somewhat different perspective on how Large Language Models (LLMs) are changing things. The rise of LLMs, combined with the growth of the global university sector and the dominance of a “publish or perish” culture[1], has inevitably produced a flood of AI-generated slop that threatens to overwhelm the whole journal process, especially when AI is also being used to generate referee reports.
But will it always be slop? I’ve been trying out various LLMs, including OpenAI’s Deep Research and, more recently, its French competitor Mistral. I recently used Deep Research to write a piece in the format of a journal article, though I have no plans to submit it anywhere.
[click to continue…]