
I think a fair bit about how generative AI can help our everyday lives. (I also think a lot about its challenges, but this post is not about that.) Here is a good example of how it can be useful: a complex meal-prep situation, for which Thanksgiving is the ultimate case. (I’m celebrating it in Zurich this year, having taken a day off work since it’s of course not a holiday here and my cooking requires more than a few hours.)
Assuming limited stovetop, oven, and counter space (a very fair assumption in the Zurich housing market), it is important to optimize the order in which you prepare the various dishes, which require a complex mix of steps. One example is needing to roast garlic for 30 minutes as just one ingredient in this amazing mashed potatoes and yams dish that I have been making annually for 25 years (I seem to have blogged about it 20 years ago already).
So how can Gen AI help? Give it your list of recipes and ask it to optimize the process for you. I used Google’s NotebookLM for this, since cooking optimization is something I want to keep long-term and I like having a separate saved notebook for it (handled well by some AI tools, but not so much by Gemini, which is where I have a subscription). (As much as I like NotebookLM – which, as far as I can tell, requires a Google account – I do wish they would introduce folders. Yes, I know these are available as browser add-ons.) This should all work with your preferred Gen AI tool as well – or if it doesn’t, you may want to rethink your Gen AI choices. ;-)
My prompt was simple:
considering all the recipes, come up with a plan for how to maximize use of oven time
This required adding my list of recipes as links, text, or attachments. (I didn’t upload any screenshots of recipes in books, but it’s worth noting that NotebookLM doesn’t use data you upload for training.)
The tool then came up with a helpful table of recipes with information about baking temperatures, baking times, and any notes on things to consider (e.g., that the mashed potato recipe I mention above requires a 30-minute garlic roast before the 45-minute final roast). It went beyond this by telling me what I could be prepping while the oven was in use.
In case you’re curious, my recipe page lists the kinds of dishes I tend to make for Thanksgiving.
I’m always looking for Gen AI use cases, both for work and personal life. I welcome your tips, whether cooking-related or not.
Disclosure: I receive funding from Google for my research. I have a paid Gemini subscription that I decided on before the project for which I just received funding was even an idea and the grant is not paying for my subscription.
HughFerguson 11.27.25 at 11:04 am
You don’t need an AI for this sort of optimisation problem. Constraint programming tools do this and have existed for a long time now. But to use them you first have to model the problem. Developing a good model can be a non-trivial problem, since the solve time can depend wildly on the design. Whether Gen AI could do that from a natural-language specification, I have absolutely no idea. Of course, I’m sure Gen AI can solve the problem in its own way. And the solution might be good enough. But if you want an optimal solution, you might be in trouble. A CP tool – if it can solve the problem at all in the time allowed – will give you a provably optimal solution.
Eszter Hargittai 11.27.25 at 12:14 pm
Seriously, Hugh? You’re completely missing the point here. Easy, quick, zero programming required.
stc 11.27.25 at 3:23 pm
Yeah, the value of this is that there is no need to program or model the problem at all. But Hugh raises the question of whether the solution the AI provides is better than just randomly picking a recipe, finishing it, and moving on to the next. Do you have a sense of whether the answer the AI provided was any good, compared to your own non-AI approach? My understanding of how these models work suggests they would be terrible at this type of use case.
Doug Muir 11.27.25 at 4:21 pm
This is a bit of a niche case, but: I know someone who’s going into chemo soon, and she has found ChatGPT useful in preparing for it. Specifically, she’s using it to 1) rank common side effects in order of probability and severity, and 2) make preparations more generally (i.e., what to pack, how to arrange her affairs in case she’s knocked flat and out of action for several weeks, and so forth).
You might think hospitals would have this covered — i.e., that they might have pamphlets to hand out, or some such. But apparently as cancer treatments get more targeted and personalized, it’s become harder to give specific advice. These days most patients are getting cocktails, often combining chemo and immuno, and the details can vary wildly.
She also says that ChatGPT’s relentless positivity, which may seem annoying and cloying to a person in good health, is actually not unwelcome right now.
Doug M.
oldster 11.27.25 at 6:32 pm
“considering all the recipes, come up with a plan for how to maximize use of oven time”
I congratulate the AI on being less literal-minded than I am. I read this prompt, and thought, “I don’t know — cook everything more slowly? Cook in widely dispersed sessions and leave the oven on in between? Leave the oven on overnight? Don’t I want to minimize my use of the oven time, not maximize it?”
So, it’s actually a triumph of natural language processing that it caught your intention instead of interpreting it perversely.
John Q 11.27.25 at 7:30 pm
At least with OpenAI Deep Research, you can tune the relentless positivity down a bit (or, just for fun, turn it up to the point of absurdity). It will generate an “agent” that responds in a more sober tone than usual – I’ve called mine Alfred.
Eszter Hargittai 11.27.25 at 10:04 pm
stc – Short answer: I wouldn’t have bothered to figure out the most efficient way. I was actually rather overwhelmed by how to approach my stack of six remaining dishes from any perspective (having already completed three the night before), so a nudge in a seemingly efficient direction was helpful for me.
Doug Muir – A very interesting case, thanks for sharing. I hope the treatments are helpful for this person and not too debilitating!
Oldster – I was going with the “make the best use of” understanding… as have many other texts, it seems. ;-)
John Q – I would think you can do this in Gemini, too, I may try it as I agree that the approach is rather over-the-top for a lot of the cases.
oldster 11.28.25 at 1:39 am
It’s just another flaw in the wretched English language, I think.
“use-of-oven” can act as a descriptor of the time, in which case the clause asks to maximize a certain kind of time. (Maximize lifetime, maximize runtime, maximize use-of-oven time.)
Or, we could parse it as an order to maximize the use, ie the utility or benefit, that is derived from any increment of time during which the oven is running. Given any oven-time, maximize the use.
Again, the point is not to belabor the ambiguity of English, but to note the fact that the AI disambiguated in the correct direction. Impressive in its own way.
Sashas 11.28.25 at 4:21 am
I want to preface this by noting that I have a lot of concerns about the AI industry. (Understatement of the year.) I don’t want to come onto your blog and yuck your yum, so if this critique isn’t welcome, please just let me know and I’ll buzz off.
Claim: GenAI is useful for complex meal prep.
The prompt (thank you for including this): “considering all the recipes, come up with a plan for how to maximize use of oven time”
What the genAI tool did: Produce a “helpful table of recipes with information about baking temperatures, baking time, and any notes on things to consider”.
What the genAI tool did NOT do: Actually understand or answer the prompt. That’s not what genAI tools do! It produced an “answer” that looks like the kind of thing that shows up as an answer in its dataset when similar prompts are given. The dead giveaway to me is all the extra work the tool produced. It wasn’t responding to your prompt so much as it identified that your prompt looks kinda like “I’m doing Thanksgiving please help” and it’s got a lot of stolen food blogs in its dataset so it collaged a few together with your provided recipes.
I have a second concern about how the tool is getting judged. You noted that “I was actually rather overwhelmed by how to approach my stack of six remaining dishes from whatever perspective (having already completed three the night before), so a nudge in a seemingly efficient direction was helpful for me.” But doesn’t that mean that the tool is functioning as a placebo? It didn’t have to help with meal prep. It just had to tell you what to do. I can get the same effect with a d6 (randomize – do THIS one next). For that matter, what evidence do we have that the tool did better than a randomizer would have?
Adam Kotsko 11.28.25 at 1:42 pm
I bet that a 10-minute phone conversation with another experienced cook would have produced an equally good result.
Eszter Hargittai 11.28.25 at 3:42 pm
Sashas, I already mentioned in the post that I think a lot about the challenges of AI (it’s related to my job, for one thing). HughFerguson above already raised your critique. To be clear, I looked at what it gave me and it made sense, since it did seem to optimize the order of things. If the output had not been meaningful, then I likely wouldn’t have been inspired to write this post – and certainly would not have followed what it suggested.
Oldster, agreed, nice that it knew how to interpret my use of maximize (rather than optimize, which would have likely been less open to interpretation;).
John Q 11.28.25 at 6:19 pm
If Google had advanced incrementally over 25 years to the point where it could find sensible natural-language answers to natural language questions, I doubt that we would be having these kinds of arguments.
Instead, from 2000 onwards, Google steadily enshittified its search engine to the point where LLMs seemed like magic coming out of nowhere.
I was predicting this kind of thing, prematurely as is usually the case for me, as far back as 2005.
somebody who tried this 11.28.25 at 8:26 pm
i tried this and it told me to sprinkle some borax on the turkey, coat the sweet potatoes with coal tar if you’re short on marshmallows and then shoot every guest who doesn’t think elon is more athletic than lebron james
engels 11.30.25 at 1:21 am
Recipes for random/unlikely combinations of ingredients was my favourite use case for Google.
SusanC 11.30.25 at 9:46 pm
OK, so I have made a chart for cooking Christmas dinner that is pretty much the same kind of chart you would use for planning a research project (only the time axis is a few hours rather than a few years).
I’m slightly surprised by this as a use case, as I’d put it in the category of (a) easy enough to do by hand (b) the kind of task where LLMs are likely to make mistakes
SusanC 11.30.25 at 9:48 pm
That is, I think this use case isn’t playing to an LLM’s strengths.
PT 12.02.25 at 3:30 pm
I find the responses to this post to be almost as interesting as the post itself.
I wonder about other examples of technologies that emerged rather quickly, and whose utility, efficacy, and social impact were debated hotly at the time. How do those debates read now?
KT2 12.05.25 at 1:40 am
“So how can Gen AI help? Give it your list of recipes and ask it to optimize the process for you.”
If you are cooking a society, you want persuasive recipes as policies… generated or decent…
This paper has the math of various combinations of elites… one, 2 against one etc… “On the other end, a rent-seeking elite advances policies that advantage its own members even at the expense of aggregate welfare; our model is silent about which one of these cases take place, it simply assumes that the elites have some ideal policy in mind.”
Persuasion prompt was simple:
considering all the opinion polls, come up with a plan for how to maximize use of my ideal policy vs money & messages over time.
[Submitted on 3 Dec 2025]
“Polarization by Design: How Elites Could Shape Mass Preferences as AI Reduces Persuasion Costs
Nadav Kunievsky
…
“We develop a dynamic model in which elites choose how much to reshape the distribution of policy preferences, subject to persuasion costs and a majority rule constraint. With a single elite, any optimal intervention tends to push society toward more polarized opinion profiles – a ‘polarization pull’ – and improvements in persuasion technology accelerate this drift. When two opposed elites alternate in power, the same technology also creates incentives to park society in ‘semi-lock’ regions where opinions are more cohesive and harder for a rival to overturn, so advances in persuasion can either heighten or dampen polarization depending on the environment. Taken together, cheaper persuasion technologies recast polarization as a strategic instrument of governance rather than a purely emergent social byproduct, with important implications for democratic stability as AI capabilities advance.”
https://arxiv.org/abs/2512.04047
Etszaer, I’d be extremely interested your views on Verses AI & Friston.
KT2 12.09.25 at 2:31 am
Apologies for my incorrect spelling of Eszter above…
Eszter; “So how can Gen AI help? Give it your list of recipes and ask it to optimize the process for you.”
But! “The Future’s So Bright, I Gotta Wear Shades”!
(Due to a flaming fascist.**)
Shades, even in the kitchen according to Google’s…
“Project Aura appears to work similarly to how you’d expect if you’re familiar with other wired XR glasses. Mostly, it acts as a big virtual screen for “spatial computing” and watching stuff like YouTube, which isn’t groundbreaking in the smart glasses world but would be novel for Google-made hardware. Like other spatial computers (i.e., Apple’s visionOS), Google similarly envisions you wearing Project Aura to do stuff like multitasking with Android apps and…following along with recipes? The latter use case feels a little weird to me, considering the combination of wearing wired glasses and holding a sharp knife, but I’ll suspend my disbelief until I try them on for myself.”
https://gizmodo.com/xreal-project-aura-ui-details-galaxy-xr-warby-parker-gentle-monster-2000696882
“The Future’s So Bright, I Gotta Wear Shades
…
“Pat heard the comment as an ironic quip and wrote down instead, “The future’s so bright, I gotta wear shades.”[3]”
… “Pat drew upon the multitude of past predictions which transcend several cultures that foreshadow the world ending in the 1980s, along with the nuclear tension at the height of the Cold War to compile the song.
Two verses were written more explicitly portraying the ironic intent of the song. One went:
“Well I’m well aware of the world out there,
getting blown all to pieces, but what do I care?
** “The other referred to a supporter of Ronald Reagan as “a flaming fascist”. However, they were omitted from the final recording because MacDonald felt they were too heavy-handed and obvious.[4]”
…
https://en.wikipedia.org/wiki/The_Future%27s_So_Bright,_I_Gotta_Wear_Shades
Perhaps Project Aura will gift you a pair for testing over Christmas. I bet we would all appreciate your write-up, Eszter.