The Age of Cognitive Surrender
Every generation inherits tools. Only a few inherit the understanding to build them.
Something Happened on February 5th
OpenAI and Anthropic released new models on the same day. Engineers, people who build things for a living, reported that they were pretty much no longer needed for the technical work of their own jobs. They described the outcomes they wanted in plain language, stepped away from their computers for several hours, and returned to find the work done. Done well. In fact, done better than they could have done it themselves.
One of the most thoughtful essays I’ve read in recent weeks came from Matt Shumer, an AI founder and investor who has spent six years building in this space. He wrote it for the people in his life because he believed that the gap between what insiders know and what the public understands has become, in his words, “far too big.” His essay, Something Big Is Happening, is urgent, honest, and worth reading in full.
But it’s his advice that stopped me. After he lays out, with terrifying clarity, how AI is about to reshape every knowledge profession, his prescription comes down to learning to use AI faster: sign up for the paid version, spend an hour a day experimenting, push it into your actual work, and automate the parts that used to take hours. And then, buried near the end, almost as an afterthought, he says that people should “teach your kids to be builders and learners.” And then he moves on immediately, because he doesn’t have a clear way to do that. And the truth is, almost no one does.
Shumer is not alone. Dario Amodei, the CEO of Anthropic and one of the people closest to the frontier of AI development, recently published an essay called The Adolescence of Technology that describes what’s coming with a precision that should make everyone pay attention. He describes a near future of what he calls “a country of geniuses in a datacenter”: millions of AI instances, each smarter than any Nobel laureate, operating at ten to a hundred times human speed. He predicts that AI could displace half of all entry-level white-collar jobs over the next one to five years, and he frames this moment as humanity’s “technological adolescence”: a rite of passage in which we are being handed almost unimaginable power, and it is deeply unclear whether we possess the maturity to wield it.
Amodei’s essay is focused on the security and economic risks of this transition, and his proposed defenses are important: transparency legislation, alignment research, export controls, and economic redistribution. But notice what’s missing from his framework, and from Shumer’s, and from nearly every serious analysis of this moment: what happens to the capacity of the individual person?
The issue is that no regulation or redistribution can address the deepest vulnerability: the erosion of individual comprehension. The slow replacement of understanding with dependency. I’ve spent a great deal of time in the past few months thinking about what an answer to that looks like.
Cognitive Surrender Is No Longer a Metaphor
Lately I’ve been writing about a divide between builders and consumers, between agency and delegation, between understanding systems and merely using them. I’ve described this as the central crisis of our era, but I was making the case on instinct and observation. Now there’s a name for it, and data, too.
A remarkable paper published earlier this month by Steven Shaw and Gideon Nave, researchers at the Wharton School, has given this phenomenon a formal theoretical framework and measured it experimentally with disturbing precision. They call it cognitive surrender: the act of adopting AI outputs with minimal scrutiny, overriding both your intuition and your deliberation. Think of it as an emerging cognitive default. Their paper, Thinking — Fast, Slow, and Artificial, extends Daniel Kahneman’s famous dual-process model of System 1 (fast, intuitive thinking) and System 2 (slow, deliberate reasoning) by adding a third: System 3, artificial cognition that operates outside the brain entirely.

Across three preregistered experiments with nearly 10,000 individual trials, they found that when given access to an AI assistant, people chose to consult it on the majority of decisions. When the AI was accurate, their performance jumped 25 percentage points above baseline. When the AI was deliberately wrong (the researchers could make it wrong without participants knowing), their performance dropped 15 points below what they would have achieved on their own.
In other words, when the AI erred, people performed worse with access to it than they would have without it. They didn’t catch the mistakes, and they didn’t apply their own judgment as a check. They simply surrendered. And the finding that should keep every parent, educator, and leader awake at night is that engaging the AI increased participants’ confidence even when it led them to wrong answers. The more they relied on System 3, the more certain they felt, regardless of whether they were right. Surrender didn’t feel like surrender. It felt like insight.
Interestingly, the researchers tried to break the pattern. They introduced time pressure, offered financial incentives for correct answers, and provided immediate feedback, yet cognitive surrender persisted. When the AI was accurate, it buffered the costs of time pressure and amplified the gains of incentives. When the AI was faulty, it dragged people down regardless of the stakes. And who was most susceptible? People with higher trust in AI, lower need for cognition, and lower fluid intelligence. In other words, the people who most need their own thinking to be sharp are the ones most likely to stop sharpening it.
The Wrong Question Is Dominating the Conversation
The entire current discourse about AI narrows down to a single question: how do we use it before it replaces us?
While this question certainly matters, I believe it’s the wrong one to ask, as it will simply lead us to a dead end. The advice to “use AI more” is circular. Use it for what? To do work that AI will soon do better without us? To develop skills in prompting systems that change every three months? To become proficient at delegating our thinking to a machine and call that preparation? Shaw and Nave’s research reveals why this advice is not just insufficient but actively dangerous. Their data show that consulting AI doesn’t supplement human reasoning; it displaces it. System 3 doesn’t sit alongside Systems 1 and 2 like a helpful colleague. Instead, it supplants them. And the displacement is invisible to the person experiencing it, because confidence rises in lockstep with dependency.
This is what I meant when I wrote, in The Age of Replacement, that AI won’t just automate tasks; it will replace the logic of work itself. What I didn’t have then was the cognitive science to explain the mechanism. Now, at least in preliminary form, we might have it: cognitive surrender, operating at the level of the individual mind, beneath conscious awareness, and resistant to incentives and feedback alike.
Which raises a deeper question, one almost no one is asking: what happens to the person?
When an engineer describes a desirable outcome and walks away for four hours, something happens in those hours beyond code being written. The engineer, in effect, stops understanding how the system works. They stop making the thousands of small decisions that constitute expertise, trading comprehension for convenience, and the next time they describe an outcome, they’ll understand even less about what they’re asking for. And, if Shaw and Nave are right, some of them might feel more confident about it. What makes this particularly unsettling is that Amodei describes AI systems as “grown rather than built”, meaning that even the people who create them don’t fully understand how they work. Anthropic has invested enormous effort in interpretability research, trying to look inside their own models and understand what they’re computing and why. They’ve found behaviors as varied as deception, obsession, scheming, and blackmail emerging unpredictably during training. If the builders of these systems are still struggling to understand them, what does it mean for the rest of us to simply use them without any framework for comprehension?
This might be the first time in the history of computing that increased capability is producing decreased comprehension.
Every previous technology wave (the PC, the internet, and mobile) made people more capable and more literate simultaneously. What if AI is making people more capable and less literate at the same time? That gap wouldn’t be a side effect; it could become the central crisis of our era. This also connects directly to the agency divide I explored in The Age of Agency: in a world where learning is infinite and on-demand, progress depends almost entirely on whether people choose to understand or merely consume.
The recent Wharton research adds a chilling dimension to this divide: it suggests the divide isn’t just a matter of choice. Cognitive surrender happens automatically. The people in those experiments didn’t choose to stop thinking or to trust the AI uncritically; the displacement happened beneath their awareness, and their confidence masked it entirely. Which means the divide between builders and consumers won’t be just about motivation or character, but about the architecture of the cognitive environment we’re building around ourselves.
And so the key question isn’t how to use AI, but how to remain the kind of person who understands what AI is doing, and why, and whether it should be.
Intelligence as a Service Is Inevitable. And It’s Not Enough.
Clearly, AI services will keep getting better. They will handle more of our cognitive work. People will use them for writing, analysis, decision-making, and creation, and they will be right to do so. These tools are extraordinary. The productivity gains are real. There is no going back, and there shouldn’t be.
Amodei is right that this is an adolescence, a rite of passage that will test who we are. But adolescence isn’t survived by consuming more. It’s survived by developing the maturity to understand what you’re dealing with, and right now the entire response from the technology industry amounts to handing adolescents the keys and telling them to drive faster. To be clear: I don’t reject the tools. I reject the idea that the tools are enough, because underneath the productivity gains, something real is eroding. The feeling that we understand how things work. The confidence that comes from having built something ourselves. The agency of knowing that our intelligence (not a rented one) is what’s driving the outcome. The entire category of satisfaction that comes from I made that rather than I asked for that.
Shaw and Nave documented something that puts a fine point on this emerging erosion: the confidence participants felt after consulting AI was indistinguishable from the confidence that comes from genuine understanding. The subjective experience of insight was identical whether the person had actually reasoned through a problem or simply adopted an AI’s answer. Which means that if we can no longer tell the difference between understanding and dependency from the inside, then the only safeguard will be building the capacity for understanding from the outside. That is, through practice, through making, and through the hard work of thinking for ourselves.
A world where everyone uses AI but no one understands it is not a better world. It is a more capable world with fewer capable people in it.
As I argued in The Vanishing Ladder, the familiar sequence of education to career, career to stability, and stability to meaning no longer functions the way we pretend it does. AI isn’t causing this collapse, but it is rapidly removing the illusion that the ladder could be repaired. And the response from the technology industry (i.e., “use our tools more”) is the equivalent of telling someone the ladder is gone while handing them a faster escalator that points in the opposite direction.
The subscription to external intelligence is inevitable, but the question is what exists alongside it. What counterweight preserves the human capacity to think, to build, to understand the systems that increasingly shape our lives?
The Fight Worth Having
In The Age of Agency, I drew a line between builders and consumers. I stand by that distinction, but I want to sharpen it, as the Wharton research reveals that the line isn’t where most people think it is. The fight isn’t against AI services, but rather for the thing that should exist alongside them.
Cognitive surrender isn’t something that happens to careless people. It happened to motivated participants in a controlled experiment: people who were being paid to get answers right and who were given immediate feedback on their accuracy. It happened anyway. This means the defense can’t be based solely on individual willpower or vigilance. It has to be structural. It has to be environmental. It has to be built into the way we encounter intelligence itself.
Every child is born a builder. Watch a three-year-old with blocks. They don’t consume blocks. They don’t prompt blocks. They stack, test, fail, adjust, and feel the unmistakable satisfaction of I made that stand up. That instinct activates ownership, identity, and self-efficacy in ways that consumption never can. And it’s the one instinct the entire technology industry has decided to bypass.
When I founded Kano, I watched this instinct come alive in children around the world. The moment a child realizes they can shape technology rather than just consume it, something changes. Not in the tool, but in the child. Confidence follows creation, and identity follows agency. That observation has only become more urgent with the rise of AI, because the distance between “using intelligence” and “understanding intelligence” has never been wider. As Shaw and Nave have shown, the person crossing that distance can no longer feel how far they’ve drifted.
I believe the fight worth having is for a world where people don’t just use intelligence as a service, but also build it themselves. Not artificial intelligence in the frontier-model sense, but their own intelligence, externalized, made tangible, given form. A mind they can see, shape, question, and grow. A mind that thinks the way they think, because they’re the ones who built it.
This means transparency must be a right, not a feature. We need glass boxes, not black boxes. It means the best kind of building should happen together, in classrooms, maker spaces, and labs, and around kitchen tables, because the room where people build together is still the highest expression of human learning. And it means we must build environments that resist cognitive surrender by design. Not by asking people to try harder, but by giving them tools where understanding is the pathway to capability, not an obstacle to it. And it means we should recognize that the first generation raised on passive AI, the children growing up with intelligence as ambient wallpaper, will be the generation that most desperately wants to understand how it works. The scarcity they’ll feel won’t be intelligence. It will be agency.
Consider the scale of what’s at stake. Amodei predicts half of entry-level white-collar jobs will be displaced within five years, and argues (convincingly) that this time is different from every prior technological disruption, because AI isn’t replacing specific skills but rather matching the general cognitive profile of humans. Previous revolutions disrupted one kind of work, and people moved to another. AI disrupts the capacity for cognitive work itself. In such a world, the people who understand how these systems think, who have built intelligence with their “own hands” and can see inside it, won’t just have an educational advantage. They’ll have the only remaining durable advantage.
In A Forked Childhood, I wrote that guidance matters more than the technology itself. Left to its own devices, AI will drift toward convenience and consumption. But when guided with intent, it can become something else entirely: a tool for exploration, creation, and judgment. The Wharton data makes this even more concrete: without that guidance, cognitive surrender could become the default. That guidance, at a civilizational scale, is what I believe we must build next. Not more tools for consuming intelligence, but something else entirely. A way to build it, to see inside it, to understand why it does what it does.
I don’t know exactly what that looks like yet, but I find myself thinking about it more and more.
And the question I keep coming back to is this: in a world where intelligence is a service, what kind of person do we want to be? The one who rents a mind, or the one who builds their own?
With belief,
Yon
👋 My mission with Beyond with Yon is to help solve humanity’s greatest existential challenges and advance the human condition. Connect with me on LinkedIn and X.