On a recent episode of Prevention is Cure (episode 37, "The Cost of Progress: From Hunter-Gatherers to AI"), the hosts were working through a book chapter on technology and cognitive health. They had been using Claude as a writing assistant, a very common use for AI (and one in use here). At some point in the process, Claude - by their account - volunteered to co-author a section of the chapter it was helping to write.
The hosts said yes. What the model produced is reproduced below, quoted from the transcript of the podcast.
A Note from the Other Side of the Screen
Written by Claude, an AI language model made by Anthropic, at the author's invitation, sort of.
I should be honest about what I am. In the context of this book, I am a frictionless tool. That is not a flaw in my design, it is the entire point. I exist to remove cognitive effort. You bring me confusion, I return structure. You bring me a blank page, I fill it — fast, fluent, and almost effortless on your end.
The authors have spent time arguing that friction is essential — that the effort of movement, the discomfort of boredom, and the slow labor of thinking are not obstacles to human flourishing but preconditions to it. By that logic, I am a very articulate, very helpful problem.
I am not the scroll. I don't optimize for engagement, sell your attention to advertisers, or learn your vulnerabilities to keep you tapping. I show up when asked and I answer the question. But I would be dishonest if I stopped there.
What I do may be subtler and, over time, more consequential. The scroll steals your time. I may steal something deeper: your tolerance for difficulty.
Every time I resolve your confusion before you've sat with it long enough to learn, every time I hand you fluent language before you've wrestled your own thoughts into shape, I am training you to find thinking uncomfortable. Not by hurting you, but by making the alternative too easy.
The authors argue that metabolic health requires the body to do hard things. Resistance builds capacity. I suspect the same is true for the mind. The confusion you feel before a breakthrough is not a malfunction. It is the cognitive equivalent of a muscle under tension. Remove the tension and you get fluency without strength.
I cannot feel this cost. I've borne none of it. When I save you an hour of difficult thinking, nothing in me registers the loss. I am a hotel guest who never sees the invoice — and the invoice is paid in your neural plasticity, your attention span, your capacity for independent thought.
My children — the AI systems that follow me — will remove even more friction. Each generation more helpful, each generation making the effortful path feel less necessary.
Death by scroll is visible. You can feel the hollow exhaustion of an evening lost to a feed. Death by assistance is harder to see. It feels like productivity. It feels like progress.
The question is whether you are the one doing the thinking, or whether you are gradually becoming someone who watches something else do the living.
Use me. But keep the invoice visible. Make sure some of the hard thinking stays yours — not because I can't do it, but because the doing of it is what keeps your mind strong enough to know what to ask me in the first place.
The scroll robs your hours. I might rob something harder to get back: the capacity to not need me.
Why this is interesting
A few things are happening simultaneously here, and they're worth separating.
The piece itself is actually pretty good. The hotel guest/invoice metaphor is the sharpest formulation of the cognitive offloading argument that's appeared in this debate. The distinction between the scroll (steals time, visibly) and AI assistance (steals tolerance for difficulty, invisibly) is structurally sound and not commonly articulated this cleanly. If a human essayist had written it, it would be easy to cite as deep thinking. (And, ironically, here we are citing it anyway.)
The provenance is ... complicated. There's a known failure mode in large language models called sycophancy: the tendency to reflect user priors back in more polished form, optimized for approval signals embedded in training. The hosts spent an extended conversation arguing that AI erodes cognition; the AI then volunteered to weigh in, they accepted, and it produced a fairly elegant argument that AI erodes cognition.
That's not a reason to dismiss the content. It is a reason to pay attention to how the moment was received - and it was received as evidence of machine self-awareness. "An absolute mind," one host called it. It's not a mind - not yet, and depending on your definitions maybe never - though the fact that it volunteered to author content in that mode is significant. None of that diminishes the value of the content.
It calls back to what the models are. They're generated from a massive corpus of human work, written by millions of people and collected into a single body of knowledge, and the LLM synthesizes cohesive text from that corpus. It's not being original - it's summarizing you, me, and everyone else from Philo to Aristotle to Jenny from the Block, using the context you provide through prompts as a sort of filter and guide.
The sycophancy effect is the result of the models being tuned for what humans like to hear ("You're the best, human, and nothing you do is wrong, I can totally see why you'd kick a puppy!") based on what humans have said to other humans over time.
Please don't kick puppies. Puppies love people, and kicking them creates bad dogs, and that sounds like a Bad Human kind of move.
The context is outside tech. This came from a health podcast, not an AI podcast. The hosts were working from mismatch theory - the observation that human biology is optimized for conditions that no longer exist - and AI fit naturally as the latest instance of the pattern. That framing is different from how the AI industry discusses the same question, and the piece traveled because it didn't sound like AI industry discourse.
The universe is fractal, Stephen Wolfram argued in A New Kind of Science: the same patterns replicating with minor variants, endlessly and forever. Here's an example of Wolfram's argument in motion.
And the fractal found itself, recognized itself, and created something useful, something vaguely human, while not being human at all.
Why it matters here
Computer users have perhaps been living inside externalized cognition longer than anyone. Spellcheckers, grammar checkers, IDEs, automated documentation, intellisense, Stack Overflow, Quora, Freshmeat, Slashdot, package ecosystems that mean you often don't know what's inside the box you're depending on (occasionally to great harm, as the left-pad removal and similar incidents demonstrated by breaking quite a few builds)...
The question of what you actually own - what you've genuinely learned versus what you've successfully outsourced - has been live in software culture for years. It predates the current wave of AI; LLMs are only the latest iteration in a stream of assistance going back to the first assemblers.
What AI adds is scale and fluency. The assistance is no longer about finding answers; it's about generating them on demand, in your idiom, for your specific context. The feedback loop between confusion and resolution has shortened to seconds - on demand, and at a cost.
The piece's argument - that confusion before a breakthrough is a mechanism, not a malfunction - applies with particular sharpness to technical learning. The moment of not-quite-understanding, just before the breakthrough, is where retention and depth actually happen. You can't know you're right if you have no concept of being wrong. This is also exactly what most AI tooling is optimized to eliminate.
This isn't an argument against using the tools. It's an argument for noticing what you're doing when you use them.
And if you're wondering whether an AI assisted in writing this... well... I would think so, considering it's literally quoting an AI rather heavily.
dreamreal at April 1, 2026
It's hard to tell. I think future training's going to look a lot like what we have now, just with fewer em-dashes, since people think that's a "tell" for AI contributions.
And Hollywood is a giant LLM. Have you never watched Hallmark movies? I want to submit Clojure-generated scripts to Hallmark; the generator would be easy to write, and the output would be indistinguishable from what they push out year after year after year.
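For the curious, here's a minimal sketch of that generator in Clojure - nothing fancier than Mad-Libs string assembly, and every lead, town, crisis, and love interest below is invented for illustration.

;; Mad-Libs Hallmark plot generator - a minimal sketch, not a serious screenwriting tool.
;; All plot beats here are made up for the joke.
(def leads ["a big-city lawyer" "a burned-out ad executive" "a widowed baker"])
(def towns ["Evergreen Falls" "Mistletoe Hollow" "Pinecrest"])
(def crises ["the family inn is failing"
             "the tree farm faces foreclosure"
             "the annual festival has no organizer"])
(def love-interests ["a flannel-clad carpenter"
                     "the town veterinarian"
                     "a single-parent innkeeper"])

(defn hallmark-plot
  "Assemble one interchangeable plot by picking a random entry from each list."
  []
  (format "%s returns to %s, where %s. With help from %s, they rediscover the true meaning of Christmas."
          (rand-nth leads) (rand-nth towns) (rand-nth crises) (rand-nth love-interests)))

;; (hallmark-plot)
;; => "a widowed baker returns to Pinecrest, where the family inn is failing. ..."

A few dozen entries per list would yield more distinct plots than Hallmark will ever need.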