I’m working on a first draft of a book arguing against pro-natalism (more precisely, arguing that we shouldn’t be concerned about below-replacement fertility). That entails digging into lots of literature with which I’m not very familiar, and I’ve started using OpenAI’s Deep Research as a tool.
A typical interaction starts with me asking a question like “Did theorists of the demographic transition expect an eventual equilibrium with a stable population?”. Deep Research produces a fairly lengthy answer (mostly “Yes” in this case) and, based on past interactions, delivers references in a format suitable for my bibliographic software (Bookends for Mac, my longstanding favourite, uses .ris). To guard against hallucinations, I get DOI and ISBN codes and locate the references immediately. Then I check the abstracts (for journal articles) or reviews (for books) to confirm that the summary is reasonably accurate.
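For anyone who wants to script that check, here is a minimal sketch in Python. It assumes the public CrossRef REST API (api.crossref.org) for the DOI lookup; the DOI and title in the usage example are placeholders, not references from the book:

```python
# Minimal sketch: check that a DOI exists and that its recorded title
# matches the citation, using the public CrossRef REST API.
import requests

def verify_doi(doi: str, expected_title: str) -> bool:
    """Return True if the DOI resolves on CrossRef and its recorded
    title contains the expected title (case-insensitive)."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        # DOI unknown to CrossRef: treat as a possible hallucination
        return False
    titles = resp.json()["message"].get("title", [])
    return any(expected_title.lower() in t.lower() for t in titles)

# Hypothetical usage: screen one citation before adding it to Bookends
if verify_doi("10.1000/example", "Demographic transition"):
    print("Reference checks out")
else:
    print("Could not verify: check by hand before citing")
```

ISBNs don’t go through CrossRef, so books would still need a separate lookup; the point is only that the anti-hallucination step is mechanical enough to automate.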
A few thoughts about this.
First, this is a big time-saver compared to doing a Google Scholar search, which may miss out on strands of the literature not covered by my search terms, as well as things like UN reports. It’s great, but it’s a continuation of decades of such time-saving innovations, going back to the invention of the photocopier (still new-ish and clunky when I started out). I couldn’t now imagine going to the library, searching the stacks for articles and taking hand-written notes, but that was pretty much how it was for me unless I was willing to line up for a low-quality photocopy at 5 cents a page.
Second, awareness of possible hallucinations is a Good Thing, since it enforces the discipline of actually checking the references. As long as you do that, you don’t have any problems. By contrast, I’ve often seen citations that are obviously lifted from a previous paper. Sometimes there’s a chain leading back to an original source that doesn’t support the claim being made (the famous “eight glasses of water a day” meme was like this).
Third, for the purposes of literature survey, I’m happy to read and quote from the abstract, without reading the entire paper. This is much frowned upon, but I can’t see why. If the authors are willing to state that their paper supports conclusion X based on argument Y, I’m happy to quote them on that – if it’s wrong, that’s their problem. I’ll read the entire paper if I want to criticise it or use the methods myself, but not otherwise. (I remember a survey in which 40 per cent of academics admitted to doing this, to which my response was “60 per cent of academics are liars”.)
Fourth, I’ve been unable to stop the program from pretending (even to describe this, I need to anthropomorphise) to be a person. If I say “stop using first-person pronouns in conversation”, it plays dumb and quickly reverts to chat mode.
Finally, is this just a massively souped-up search engine, or something that justifies the term AI? It passes the Turing test as I understand it – there are telltale clues, but nothing that would prove there wasn’t a person at the other end. But it’s still just doing summarisation. I don’t have an answer to this question, and don’t really need one.