Feb 22, 2026 • 8 min read • EN

On the Deconstruction of Human Value as Productivity

Disclaimer: This piece is about the slope of history, not its current coordinates.


Industrial society has long tied human “value” to productivity. Whoever produces results faster and more efficiently gets a seat at the table. This logic has been remarkably stable for two hundred years — companies run on it, education filters people through it. We’ve been trained since childhood to optimize output per unit of time. But it didn’t just give us jobs. It gave us a narrative about who we are: I am someone who builds things. I am efficient. I am useful.

That framework is starting to crack. For a vast range of digitizable tasks, models already exceed individual humans in speed and scale. Codex and Claude Code are writing the code [2]. Seedance 2.0 generates cinematic video from a single prompt — and Hollywood is sending cease-and-desist letters [3]. OpenClaw runs an AI agent on your computer 24/7, actually doing things for you — monitoring stocks, searching for information, maintaining projects, answering emails; one user’s agent negotiated $4,200 off a car purchase over email while he slept [9]. Claude Cowork knocks out in an hour what used to be a week-long analysis report. All of this “output” is being cost-compressed to near zero. If output is no longer scarce, where do people derive their sense of worth?

For software engineers, the post-AGI era has already arrived. I will never again feel proud about how many lines of code I wrote in a night. You might say: it still takes an hour to build a simple blog site, let alone anything more complex. But go back to my disclaimer — I’m talking about the slope, not the coordinates. Having watched this unfold over the past three years with my own eyes, here’s what I feel in my gut: software products will soon be like toothpaste. Whichever one is sitting in front of me on the shelf, I’ll just grab it.

Recursive self-improvement is charging full speed ahead. This is no longer a theoretical concept — it is happening. Dario Amodei put it bluntly at Davos: “We would make models that were good at coding and good at AI research, and we would use that to produce the next generation of models and speed it up to create a loop.” [5] In the same conversation, Demis Hassabis said: “It remains to be seen — can that self-improvement loop that we’re all working on — actually close, without a human in the loop.” [6] Boris Cherny, the head of Anthropic’s Claude Code, says he hasn’t written a single line of code by hand in over two months. Across the company, 70–90% of code is AI-generated, and 90% of Claude Code’s own codebase was written by Claude [2]. An OpenAI researcher likewise declared: “100%, I don’t write code anymore” [2]. AI is building the next version of itself.

The time horizon of tasks models can complete is rising exponentially. METR’s evaluations show that in mid-2024, GPT-4o could reliably handle tasks spanning only a few minutes. By late 2025, that had jumped to several hours. The just-released data for Claude Opus 4.6 puts the 50%-time horizon at roughly 14.5 hours — meaning for tasks that would take a skilled human expert close to two full workdays, the model has a coin-flip chance of getting it done [1]. The doubling time on this exponential curve is about four months.
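The slope framing above reduces to simple arithmetic. Here is a minimal sketch that extrapolates the curve, using the 14.5-hour anchor and four-month doubling time cited in the paragraph; the pure-exponential fit form is my illustrative assumption, not METR's model:

```python
# Illustrative extrapolation of the 50% task-completion time horizon.
# Anchor (~14.5 hours, Feb 2026) and ~4-month doubling time are taken
# from the essay; assuming growth stays purely exponential.

def horizon_hours(months_from_now: float,
                  current_hours: float = 14.5,
                  doubling_months: float = 4.0) -> float:
    """Project the 50% time horizon forward under exponential growth."""
    return current_hours * 2 ** (months_from_now / doubling_months)

for months in (0, 4, 8, 12):
    print(f"+{months:2d} months: ~{horizon_hours(months):.1f} h")
# → +0: ~14.5 h, +4: ~29.0 h, +8: ~58.0 h, +12: ~116.0 h
```

If the trend held, the horizon would cross a 40-hour human work week within roughly six months; whether the curve actually continues is exactly the open question.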

I can ship a complete product fast. My ideas are so abundant they’re completely worthless. So what? The word “builder” has lost most of its meaning. Building something is so cheap today. Calling someone a builder feels no different to me than calling them a human. The thrill of vibe coding fades fast into emptiness — and a persistent anxiety. Every few months AI capabilities jump another level, and the anchor I’d just found for my own value gets ripped out. You’re forced to keep answering the same question over and over: what am I worth? And the answer you just came up with might be completely obsolete in four or five months. Career planning doesn’t exist anymore — you can’t make a five-year plan for a world that resets every four months. You can’t even make a two-year plan. This is not a jobs crisis. This is an identity crisis.

Sure, you can say: humans need to harness AI, it’ll 10x your productivity. Humans bring judgment, taste. But apart from the fact that society still needs humans to be accountable for outcomes — and that is a real gap — will AI’s judgment and taste necessarily be worse than mine? Can I actually harness AI? If I can today, it won’t be long before AI can harness other AI too. This is a rat race. It reminds me of zombie movies where people just keep running, endlessly. Frontend engineers are probably the first ones to get bitten. The people training models in the labs might buy themselves another two or three years. But against the trend, these are vanishingly small differences. A lot of people frame “AI replacing humans” as “AI augmenting people.” I think that’s a comforting lie. When everyone knows the wave is coming and just doesn’t know when it’ll reach them, the result is that nobody invests in the long term anymore. From what I see, the entire industry is optimizing for short-term gain — grab whatever you can right now, because nobody knows whether any of it will still matter in six months.

In his long essay The Adolescence of Technology, Dario Amodei argues [4] that what makes this AI revolution fundamentally different from previous industrial revolutions comes down to two things: it’s so fast that people have no time to adapt or transition — the identity shift from agrarian to industrial society took generations; this one is happening in years — and its cognitive reach is so broad that it starts with the highest-value workers first. Law, finance, tech, consulting — all getting hit at once, leaving no room to switch careers. AI is not a substitute for specific human jobs. It is a general substitute for human labor.

Since late 2025, for the first time, I have truly felt the waterline of technology rising to my neck.

So what does a post-productivity-surplus world look like? What still matters? I think it’s this: the ability to stay curious, stay optimistic, and stay focused. I believe deeply in human resilience. I look forward to seeing how our species, faced with such a massive upheaval of our inner world, creates new forms of value and meaning. That will be a beautiful world. But the road from here to there may be one of the most painful transformations in human history.

When “output” can no longer define you, you’re forced to confront a question most people have never truly answered: what do you actually want? Work used to answer that for you. Now it doesn’t. The agony of choice, the burden of freedom — here they come again.

2026 has to be a year of getting back to basics. Go be in nature. Build real relationships. Move your body. Take care of your health. Return to the things that don’t need to be justified by productivity. Reconnect with the real world.


References

[1] METR, “Task-Completion Time Horizons of Frontier AI Models,” updated Feb 20, 2026. https://metr.org/time-horizons/

[2] Fortune, “Top engineers at Anthropic, OpenAI say AI now writes 100% of their code,” Jan 29, 2026. https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/

[3] TechCrunch, “Hollywood isn’t happy about the new Seedance 2.0 video generator,” Feb 15, 2026. https://techcrunch.com/2026/02/15/hollywood-isnt-happy-about-the-new-seedance-2-0-video-generator/

[4] Dario Amodei, “The Adolescence of Technology,” Jan 26, 2026. https://www.darioamodei.com/essay/the-adolescence-of-technology

[5] Dario Amodei at World Economic Forum, Davos, Jan 2026. “We would make models that were good at coding and good at AI research, and we would use that to produce the next generation of models and speed it up to create a loop.” https://daveshap.substack.com/p/recursive-self-improvement-is-six

[6] Demis Hassabis at World Economic Forum, Davos, Jan 2026. “It remains to be seen — can that self-improvement loop that we’re all working on — actually close, without a human in the loop.” https://www.foommagazine.org/is-research-into-recursive-self-improvement-becoming-a-safety-hazard/

[7] Dean W. Ball, “On Recursive Self-Improvement (Part I),” The Foundation for American Innovation, Feb 2026. https://www.thefai.org/posts/on-recursive-self-improvement-part-i

[8] Tyler Cowen, “Recursive self-improvement from AI models,” Marginal Revolution, Feb 2026. https://marginalrevolution.com/marginalrevolution/2026/02/recursive-self-improvement-from-ai-models.html

[9] Milvus, “OpenClaw (formerly Clawdbot/Moltbot) Explained: A Complete Guide to the Autonomous AI Agent,” Feb 2026. https://milvus.io/blog/openclaw-formerly-clawdbot-moltbot-explained-a-complete-guide-to-the-autonomous-ai-agent.md
