My last post described my attempt to generate a report on housework using Deep Research, and the way it came to a crashing halt. Over the fold, I’ve given the summary from the last version before the crash. You can read the whole report here, bearing in mind that it’s only partly done.
As I said, I chose the questions to ask and the points on which to press further. DR extracted the data (I was planning to get detail on this process before the whole thing crashed), produced graphs to my specifications and generated the first draft of the text, with a style modelled on mine.
If I were doing this to produce a report for publication, I’d say I was about halfway there, after only a few hours of work on my part. But as with LLMs in general, I suspect the final editing would take quite a bit longer.
Still, the alternative would have been either nothing (most likely) or a half-baked blog post using not-quite-right links to the results of Google searches. So, I’m going to keep on experimenting.
Early versions of LLMs were mostly substitutes for medium-level skills. They made it easy for someone barely literate to generate an adequate business email or (on the graphics side) for a complete klutz like me to produce an obviously-AI illustration for a post (Substack expects some kind of picture).
But with Deep Research, I think there’s an amplification of general research skills. It’s ideal for topics where I have some general idea of the underlying reasoning, but am not familiar with the literature and am unaware of some important arguments.