
Feb. 19, 2026

I came across an article, linked in Benedict’s Newsletter, arguing that AI will replace knowledge workers very soon: https://shumer.dev/something-big-is-happening. I rarely spend time reading this kind of article because I tend to see them as stoking anxiety as a form of self-promotion. However, Benedict says in his newsletter, “You should read it, not to agree, but to see the psychology”, which piqued my curiosity.

I have to admit the article is well written and very persuasive. It certainly provokes anxiety, in an almost overwhelming way. To counterbalance the feeling, I asked Claude, “What are your thoughts on this article?” I am pasting its response here as a remedy.

“This is a well-written piece that’s clearly designed to be persuasive and accessible to a non-technical audience. Here are my thoughts:

What it gets right: The core observation is real — AI capabilities have improved dramatically, especially for coding tasks, and the pace has been striking. The advice to actually use the tools rather than dismiss them based on outdated experiences is sound. And the practical suggestions (get financially resilient, build adaptability, experiment daily) are reasonable regardless of exactly how fast things move.

Where I’d push back or add nuance:

The Covid analogy is rhetorically effective but a bit misleading. Covid hit everyone simultaneously and indiscriminately. AI adoption is messy, uneven, and mediated by institutions, regulations, contracts, inertia, and human preferences in ways that a virus isn’t. The timeline from “AI can do a task in a demo” to “entire industries restructure” is consistently longer than technologists predict.

Shumer’s personal experience — describing what he wants and walking away for four hours — is real, but it’s also the best case. He’s a technical founder who knows how to specify software well. The experience for a random business user trying to get AI to handle ambiguous, context-heavy work in a messy organizational environment is often much rougher.

The “nothing done on a computer is safe” framing is too sweeping. There’s a meaningful difference between “AI can produce output that looks like what a knowledge worker produces” and “AI can reliably replace knowledge workers in complex organizational contexts with accountability, judgment calls, stakeholder management, and institutional knowledge.” Those are very different problems.

The self-improvement loop discussion borders on hype. Yes, AI is being used in AI development — but “the AI helped debug its own training run” is a far cry from recursive self-improvement in the dramatic sense. It’s more like any powerful tool being used by its own creators.

And the Amodei “50 million Nobel laureates” thought experiment is provocative but speculative. It assumes capabilities scale and generalize in ways we don’t yet have strong evidence for beyond narrow benchmarks.

Overall: It’s a useful wake-up call for people who haven’t engaged with AI since 2023. The urgency around learning the tools is warranted. But the tone treats the most aggressive timeline estimates as near-certainties, and I think a more honest framing would acknowledge considerably more uncertainty about how fast and how completely this plays out in practice. The signal is real; the timeline confidence is where I’d be more skeptical.”

So, the fact that Claude can convincingly argue against that article leads to a paradox: if the article is correct, then Claude’s rebuttal must be wrong, which means AI is not as powerful as the article suggests, so the article is wrong. But if the article is wrong, then the fact that Claude can identify the grounds for a counterargument better than most of us suggests that AI is very powerful, so the article is right. /jk