Why You Should Get Your Kids the $200/Month Claude Max Subscription
A Confession From Someone Who Worries About This
I’ve spent the last several months writing about the dangers of outsourcing our thinking to AI. Each of my recent essays (The Age of Replacement, The Vanishing Ladder, The Age of Agency, and The Age of Cognitive Surrender) is a different variation on the same urgent question: what happens to the person when intelligence becomes a service?
And now I’m going to encourage you to spend $200 a month on an AI subscription for your young adult child. Let’s sit with that tension for a moment. I know it may sound odd, even inconvenient, but I think that tension is exactly where the most important conversation is hiding.
Something Changed on February 5th
OpenAI and Anthropic released new models on the same day. Many engineers, people who build things for a living and who developed their judgment through years of hard technical work, reported that they were no longer needed for the technical work in their own jobs. They described outcomes in plain language, stepped away from their computers for several hours, and returned to find the work done. Done well. Perhaps even done better than they could have done it themselves.
I’ve been following this transition closely for the past 12 months, but something about February 5th felt different. Not so much in the technology (though that too is mind-blowing) as in the faces of the people describing it.
Dario Amodei, the CEO of Anthropic, recently wrote that this disruption is categorically different from every technological revolution that came before it. Previous disruptions replaced specific skills. The loom replaced the hand-weaver. The calculator replaced the accountant’s arithmetic. People moved from one kind of work to another. What AI is doing is different: it is replacing the general cognitive profile of humans. There is no adjacent ladder to climb when the escalator itself is moving.
And the advice from every corner of the technology industry is the same: learn to use AI faster. But faster toward what?
The Career Your Kid Is Entering Doesn’t Look Like the One You Prepared For
I remember what career preparation looked like when the path was clearer. A degree, a credential, a body of knowledge we could demonstrate on a resume and deploy in a role. The logic was sequential: learn something, become the person who knows it, build a career around knowing it. Full stop.
That logic is now rapidly dissolving.
In the world your young adult is entering, the question is no longer what they know, but how they work. Whether they can identify what a problem actually requires, articulate it with enough precision to direct a powerful system, evaluate the output with judgment, and push back when the result is wrong. Whether they can collaborate with AI in a way that sharpens their thinking rather than replacing it.
That’s a different skill set. And it’s not developed by using AI occasionally. It’s developed through volume. Through hundreds of hours of sustained engagement, running complex research threads, stress-testing arguments, iterating through drafts, and asking better questions until the answers improve. Through failing with the tool enough times to develop a nose for when it’s drifting and when it’s sharp.
It is my belief that the professionals who will have an edge in this era won’t be those who learned to use AI, but those who learned to think with it. And that fluency takes practice. Real, sustained, daily practice.
What the $200/Month Actually Buys
Claude Max, Anthropic’s highest-tier individual subscription, provides 20 times the usage of the standard plan. Extended sessions. Complex multi-step problems. The kind of sustained engagement that builds genuine fluency rather than surface familiarity.
Let’s break down and estimate what that access is worth.
A semester of college textbooks runs $500 to $1,000. For static knowledge that doesn’t adapt to the problem in front of your child, delivered once, updated rarely (or never). A month of tutoring in a single subject runs $200 to $400, with fixed hours, limited availability, and the ceiling of one person’s expertise. A professional certification course runs $1,000 to $3,000, is front-loaded, and quickly becomes outdated.
On the other hand, Claude Max is a thinking partner available every hour of every day for whatever problem they’re actually working on. A job application. The analysis their manager asked for. The side project they’re building at midnight. The subject they’re trying to understand before a meeting. It meets them at their current level and moves with them as they grow.
And here is what I believe is the real career argument, the one that almost no one is making clearly enough:
The young professional who spends a year working with AI at depth won’t just be more productive. Instead, they are highly likely to have developed something that won’t appear on a resume but will show up immediately in the quality of their work. They might think more precisely because they’ve had to articulate their thinking precisely. They might evaluate better because they’ve seen, at scale, what strong and weak AI outputs look like. They may have built an intuition for when to delegate and when to think for themselves.
That intuition, I believe, is the edge. Not simply that they know how to use Claude, but that they’ve developed the professional judgment to know when to use it, how to use it, and, perhaps most importantly, when not to.
The Thing I Can’t Leave Out
But I wouldn’t be writing honestly if I stopped there. And if you’ve been reading my recent letters, you know I can’t stop there.
In The Age of Cognitive Surrender, I wrote about research from Steven Shaw and Gideon Nave at the Wharton School that should make every parent reassess everything. Across thousands of experimental trials, people who used AI assistance performed worse when the AI led them astray, and felt more confident about their wrong answers regardless. The researchers called it cognitive surrender: the automatic adoption of AI outputs with minimal scrutiny, operating beneath conscious awareness, resistant to incentives and feedback alike.
In short: Surrender didn’t feel like surrender. It felt like insight.
The tool is powerful precisely because it makes hard things feel easy, which means the risk isn’t the laziness you can see, but the dependency you can’t. The young professional who relies on Claude to think won’t feel like they’re offloading their cognition. They will likely feel capable, fast, and on top of things. And if Shaw and Nave are right, some of them will be more confident in their wrong answers than they would have been without the tool at all.
I am not telling you this to talk you out of the Claude subscription, but because the subscription without the conversation is only half of the investment.
A Conversation Worth $200 a Month
Buy the subscription, then sit down with your kid and have the hardest version of the conversation that comes with it. Tell them what the research shows: that the same tool that can accelerate their development will, if used without intention, quietly erode the very capabilities they’re trying to build. That confidence and competence are not the same thing. That the invisible dependency is precisely the kind you don’t feel until it’s already cost you something.
Tell them that the goal is not to use AI more but to remain the kind of person who understands what AI is doing (and why), and whether it should be. And then give them the framework for what that looks like in practice: bring a real position to the conversation before asking for help. Interrogate the output rather than adopt it. Notice when you’re generating and when you’re just confirming. Use the tool to think harder, not to stop thinking.
When I founded Kano, I watched something happen in children around the world at the moment they realized they could shape technology rather than just consume it. Something changed in the child. Confidence followed creation, and identity followed agency. That instinct doesn’t disappear when they grow up, but it needs to be deliberately protected.
The subscription is worth $200 a month, but only if what sits alongside it is the commitment to remain the person who understands what’s happening and why.
The Question I Keep Coming Back To
In a world where intelligence is a service, what kind of person do you want your kid to be? The one who rents a mind, or the one who builds their own and uses the rented mind to build it faster? I think that’s the right framing, but I’m not certain. I find myself returning to it more than almost anything else I’m thinking about right now.
So, buy the subscription and have the conversation. Do both.
With belief,
Yon
This essay is part of the Beyond with Yon series. Previous essays: The Age of Cognitive Surrender | The Age of Agency | The Vanishing Ladder | The Age of Replacement | A Forked Childhood.
My mission with Beyond with Yon is to help solve humanity’s greatest existential challenges and advance the human condition. Connect with me on LinkedIn and X.
Thumbnail credit: Freepik