Claude Code: How AI is Revolutionizing Software Development in 2025
The software engineering landscape is undergoing a seismic shift. What was once considered science fiction—AI writing production code reliably—is now becoming everyday reality for thousands of developers worldwide. In a groundbreaking conversation with Boris Cherny, the creator of Claude Code at Anthropic, we discover how artificial intelligence is fundamentally transforming not just how we build software, but what it means to be a software engineer in 2025.
Key Insights
- Productivity Revolution: Anthropic engineers are experiencing 70-150% productivity gains, with some achieving 1,000x improvements in specific tasks compared to traditional development methods
- Terminal-First Design: Claude Code's simple CLI interface has proven more resilient and user-friendly than anticipated, spawning successful adaptations across web, desktop, and mobile platforms
- Latent Demand Principle: The most successful product features emerge from observing how users naturally want to work, not from top-down planning—plan mode itself emerged from users requesting this capability
- Model-First Architecture: Building for future model capabilities (6 months ahead) rather than current limitations drives sustainable product development in the AI era
- Agent Topologies: Multi-agent systems with uncorrelated context windows are enabling exponentially larger problems to be solved with coordinated AI agents
- The Bitter Lesson: More general models consistently outperform specific, hand-crafted solutions—a principle that should guide all architectural decisions
The Birth of Claude Code: From Accident to Industry Standard
When Boris Cherny first joined Anthropic's Labs team in 2024, he had no mandate to build Claude Code. Instead, he started with a modest goal: understanding how to use the Anthropic API. What emerged was a simple terminal chat application—nothing more than a basic interface to ask questions and receive answers.
The real magic happened when Anthropic released tool-use capabilities. Curious about whether this new feature would prove useful, Cherny began experimenting. He asked Claude to read a file. It worked. Then he pushed further, asking it to retrieve information about the music currently playing on his Mac. Claude did this by writing AppleScript to interact with his computer directly.
"That was my first, I think, ever 'feel the AGI' moment," Cherny recalls. "It was like, 'Oh my God, the model, it just wants to use tools. That's all it wants.'"
This wasn't a carefully orchestrated product launch. It was discovery through experimentation. Cherny built it in a terminal because that was the fastest way to prototype without requiring UI development skills. Two days later, he shared it with his team for feedback. The response astonished him: Robert, an engineer sitting across from him, was already using it to write actual code in this prototype form.
The adoption was viral within Anthropic. When the company's leadership asked if engineers were being forced to use Claude Code, the answer was surprising: no mandate existed. Engineers were simply telling each other about it organically. The internal adoption chart became nearly vertical. When Claude Code launched externally in February 2025, it wasn't a coordinated marketing effort; it was the culmination of months of organic enthusiasm from Anthropic's engineering team.
Early Use Cases: Learning What Users Actually Need
In 2024, when Claude Code first appeared, the AI model wasn't particularly good at writing complete applications. This limitation, however, became a feature rather than a bug. Engineers discovered that Claude excelled at specific, bounded tasks that involved lower risk.
The earliest use cases were practical and unsexy: automating Git commands, writing unit tests, and executing complex Bash operations. Cherny himself realized he'd forgotten most of his Git commands because Claude Code had been automating them for so long. Engineers working with Kubernetes found the tool invaluable for managing infrastructure.
But the most interesting discovery came from users themselves. Engineers began creating Markdown files with documentation and instructions, then asking Claude Code to read these files before tackling tasks. This behavior pattern of providing context through documentation led directly to CLAUDE.md, one of the product's most powerful features.
"CLAUDE.md came from latent demand," Cherny explains. "We saw users doing this thing, so we built product around it." The principle of latent demand would become central to Claude Code's development philosophy: observe what users are trying to do, then make that easier. Don't try to change what they're doing; enhance it.
CLAUDE.md itself is deceptively simple. At Anthropic, their version contains just two lines: one instruction to enable auto-merge for pull requests (eliminating constant context-switching between coding and code review), and another to post PRs to the internal Slack channel for review. The file's real intelligence comes from documentation that the entire team continuously updates. When Cherny spots a preventable mistake in a pull request, he tags Claude on the PR and asks it to add the corresponding instruction to their documentation.
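As a concrete illustration, a minimal CLAUDE.md in this spirit might look like the sketch below. The exact wording is hypothetical; only the two-instruction idea comes from Cherny's description.

```markdown
# CLAUDE.md

- When you open a pull request, enable auto-merge so review never blocks
  further work.
- After opening a pull request, post a link to it in our internal Slack
  review channel.
```

The point is the brevity: the file captures team workflow conventions, not model instructions, and grows only when a real mistake reveals a missing rule.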
This creates a self-improving system. The model learns from real mistakes and successes. The documentation grows organically to capture patterns the team discovers. And critically, the team stays lean because everyone contributes to shared knowledge rather than each engineer maintaining separate prompts.
The question of CLAUDE.md size reveals something important about working with AI systems: more instructions don't always mean better results. "If it ever gets too long, my recommendation is to delete it and start fresh," Cherny advises. "Many people tend to over-engineer this. The model's capabilities evolve with each new version, so the goal is to do the minimal possible to keep the model on track."
The Terminal: A Constraint That Became a Strength
Before Claude Code, most AI coding assistants were building complex graphical interfaces. Cherny built in a terminal because it was the path of least resistance—no UI work required for one person working solo. Everyone expected this to be temporary, a stepping stone toward a "real" product.
That's not what happened. The terminal became Claude Code's defining feature.
Designing for terminal constraints required rethinking conventional UI patterns. A terminal offers only a character grid of roughly 80 by 100 cells, 256 colors, one font size, and minimal mouse interaction. Yet these constraints forced elegant solutions. The team initially tried adding mouse support but discovered it felt terrible: virtualized scrolling in a terminal, built on escape codes whose lineage dates back to 1970s-era terminal standards, makes for a clunky experience.
Instead of fighting these limitations, the team embraced them. This is where Cherny's background in front-end engineering proved invaluable. He built Claude Code using React rendered to the terminal, bringing modern design principles to an ancient interface. The result feels modern and intuitive, something difficult to achieve in terminal environments where older toolkits like ncurses appear dated and overly complicated.
Small details received obsessive attention. The terminal spinner—that tiny animated element showing activity—underwent 50 to 100 iterations. Eighty percent of those versions never shipped. The team tested, discarded what didn't feel right, and continued iterating. This rapid prototyping capability, where 20 different versions can be built in a couple of hours, enabled the creation of a genuinely joyous product.
"User delight is incredibly important," Cherny emphasizes. "A useful product that people don't love isn't ideal. It needs to be both useful and delightful."
What started as a limitation became an unexpected advantage. The simplicity made Claude Code accessible to engineers who weren't comfortable with Vim or complex terminal tools. The straightforward interface reduced cognitive load, allowing developers to focus on the task at hand rather than tool navigation. And remarkably, the terminal interface has proven portable: Claude Code now exists on web, desktop, iOS, Android, Slack, GitHub, VS Code, and JetBrains IDEs. The core agent remains the same; only the interface changes.
Plan Mode: A Feature Born from User Behavior
Six months into Claude Code's existence, engineers were doing something that caught the attention of the product team. Users would open Claude Code and make requests like: "Come up with an idea, plan this out completely, but don't write any code yet." This pattern was consistent across different conversations. Sometimes users wanted to talk through ideas casually. Other times they requested sophisticated specifications written by Claude before implementation.
The common thread was clear: users wanted to think before coding.
Rather than building an elaborate feature, Cherny acted on a Sunday night at 10 p.m. After reviewing GitHub issues and user feedback on internal Slack channels, he identified the pattern. In 30 minutes, he wrote a simple solution: add one sentence to the prompt instructing Claude not to code yet. That became plan mode.
It shipped Monday morning.
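Mechanically, the feature amounts to little more than a conditional line in the system prompt. A minimal sketch in Python illustrates the idea; the prompt wording and function name here are invented for illustration, not Claude Code's actual implementation:

```python
# Sketch: "plan mode" as one extra sentence in the system prompt.
# All prompt text below is illustrative; the real prompt is not public.

BASE_PROMPT = "You are a coding agent with access to the user's repository."

PLAN_MODE_INSTRUCTION = (
    "Before writing any code, produce a step-by-step plan and wait for "
    "the user to approve it. Do not edit any files yet."
)

def build_system_prompt(plan_mode: bool) -> str:
    """Assemble the system prompt, appending the plan-mode sentence if enabled."""
    if plan_mode:
        return f"{BASE_PROMPT}\n\n{PLAN_MODE_INSTRUCTION}"
    return BASE_PROMPT
```

A single boolean flips the agent between planning and executing, which is consistent with Cherny's account of shipping the feature in half an hour.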
Plan mode's simplicity belies its power. By separating planning from execution, users gain confidence that Claude understands the problem before writing code. The feature proves so effective that Cherny estimates 80% of his Claude Code sessions now begin in plan mode. He'll write a plan in one terminal tab, move to another tab to start a second plan, and once he has multiple solid plans mapped out, he'll execute them in parallel.
The trajectory of plan mode hints at something even more significant. As Claude's capabilities have improved, particularly with Opus 4.5 and beyond, the need for babysitting code execution has diminished. Earlier versions would sometimes veer off track even with a good plan. Now, once the plan is solid, Claude stays on track and executes nearly flawlessly almost every time.
"Before, you had to babysit after the plan and before the plan. Now it's just before the plan," Cherny observes. "Maybe the next thing is you just won't have to babysit at all. You can just give a prompt and Claude will figure it out."
In fact, Cherny speculates that plan mode itself has a limited lifespan. As models continue improving exponentially, Claude Code is already learning to enter plan mode automatically when it detects a situation where a human would want planning. The explicit request will eventually become unnecessary.
Productivity Revolution: The Numbers Are Staggering
The productivity gains from Claude Code aren't marginal improvements. They're transformational. Steve Yegge, citing work at Anthropic, posted that engineers using Claude Code are experiencing productivity gains of up to 1,000x relative to Google engineers at that company's peak. Cherny finds the figure almost incredible: just three years ago, the industry treated 10x engineers as exceptional, and now claims of 1,000x improvements against historical benchmarks circulate almost routinely.
Within Anthropic, Claude Code adoption is near-universal. Every technical employee uses it daily. Remarkably, even half of the non-technical sales team has found value using it. The company's engineering team doubled in size over the past year while per-engineer productivity increased approximately 70%.
These numbers seem almost unbelievable when compared to traditional software engineering metrics. At Meta, Cherny worked on code quality initiatives. Achieving just a 2% productivity improvement required hundreds of people working full-time for an entire year. At Anthropic, Claude Code delivered a 150% productivity improvement—gains that would have been unthinkable under traditional approaches.
External data confirms the scale of the transformation. Mercury reports that 70% of startups now choose Claude as their AI coding assistant. SemiAnalysis found that 4% of all public code commits now originate from Claude Code. NASA uses Claude for Mars rover path planning. The applications span from practical software development to cutting-edge scientific computation.
What's most remarkable is that these improvements come despite Claude Code remaining far from perfect. When Claude Code first launched externally in February 2025, it wrote only about 10% of code while developers wrote the other 90% manually. Yet even at this limited capability level, the productivity gains were significant. As model capabilities improved, the ratio shifted. Now, with Opus 4.5 and later versions, many Anthropic engineers report that Claude Code writes 70-90% of their code changes. Some have achieved 100%, where Claude handles all code generation while the engineer focuses on specification, review, and direction.
This mirrors the historical arc of other technological transitions. Early automobiles were unreliable and required constant maintenance, yet still offered value over horses. Early electricity was dangerous and frequently failed, yet still proved superior to gas lamps. Claude Code at 10% capability was already useful; at 90% capability, it's transformative.
Building for Tomorrow's Model, Not Today's
One of Cherny's most important insights applies directly to founders building on AI systems: don't build for the model of today. Build for the model six months from now.
This seems counterintuitive. Wouldn't it make sense to maximize current product-market fit before worrying about future capabilities? The answer, Cherny argues, is no. If you optimize entirely for current model limitations, you'll be leapfrogged by competitors who anticipated future improvements. Since models improve every few months, this competitive window is brutally short.
Instead, Cherny advocates exploring the current model's boundaries to understand where it struggles, then building the product for what you believe the model will handle in the near future. This requires both technical understanding and a willingness to be wrong.
"There is no part of Claude Code that was around six months ago," Cherny notes. "You try things, you give it to users, you talk to users, you learn. Eventually, you might end up with a good idea; sometimes you don't."
This philosophy is reflected in the famous "Bitter Lesson" paper by Rich Sutton, which Anthropic has framed and hung on their wall. The core principle: more general models consistently outperform more specific, hand-crafted solutions. Never bet against the model. This means Claude Code constantly makes architectural decisions about whether to invest engineering effort in scaffolding code to extend capabilities by 10-20%, or to wait a few months for the model to improve and handle that functionality natively.
Usually, waiting is the right call. Model improvements are exponential; scaffolding improvements are linear. The shelf life of any specific implementation is measured in months before model improvements render it obsolete. This creates a unique development cycle where code is continuously rewritten, and there's pride in removing features because the model can now handle them without explicit assistance.
Latent Demand: The Principle That Guides Everything
If there's a single idea that Cherny emphasizes repeatedly throughout his career, it's latent demand. He mentions it constantly, acknowledging that this concept wasn't obvious during his earlier startup attempts but has become the lens through which he views all product decisions.
The principle is deceptively simple: people will only do things they're already trying to do. You cannot convince people to do new things. But if people are struggling to accomplish something, you can make that easier, and they'll appreciate it immensely.
Applied to Claude Code, latent demand explains almost every successful feature. Engineers already had Claude open in their browser, already writing specifications and discussing ideas with it. Plan mode didn't create a new behavior; it brought that existing behavior into Claude Code. People already annotated their code with documentation; CLAUDE.md built scaffolding around that pattern.
Cherny even has a practice of walking around the Anthropic floor, standing behind engineers to observe how they use Claude Code. He'll ask, "How are you using this?" This observation method catches patterns that don't surface in formal feedback. The same patterns appear in GitHub issues and internal Slack discussions. Feature ideas emerge from seeing what people are already doing and making it frictionless.
This philosophy explains why Claude Code's feature set looks so different from competitors. Competitors often ask, "What should AI do for developers?" Anthropic asked, "What are developers already doing that we can make easier?" The distinction creates products aligned with actual behavior rather than idealized workflows.
Screening for the Right Mindset in Technical Hiring
The transition to AI-augmented development requires different hiring approaches. Traditional software engineering valued strong opinions from experienced engineers. Senior engineers with comprehensive mental models of system architecture were highly sought. Experience counted heavily.
This hiring philosophy is becoming obsolete. A large body of existing knowledge can become a liability when capabilities change monthly. Engineers who learned specific architectural patterns that worked five years ago might overfit to those approaches even when new methods are superior.
Cherny looks for engineers with a fundamentally different mindset. When hiring, he sometimes asks, "Tell me about a time you were wrong." Interestingly, many senior people excel at this question. They can recognize their mistakes in hindsight and explain what they learned. Other engineers struggle, reluctant to accept responsibility for failures.
"For me personally, I'm probably wrong half the time," Cherny admits. "Half my ideas are bad. You just have to try things, give them to users, talk to users, learn, and eventually, you might end up with a good idea."
This iterative, scientific mindset matters more than raw technical knowledge in the age of rapidly improving models. Anthropic has even begun accepting uploaded Claude Code transcripts as hiring materials. A transcript showing someone working with Claude Code reveals far more than traditional coding interviews: whether they check logs carefully, whether they correct Claude when it goes off track, whether they use plan mode effectively, whether they ensure tests are written, whether they think systematically about problems.
Imagine a spider web graph like in NBA 2K video games, with different axes representing different competencies: systems thinking, testing discipline, design sense, automation capability. A Claude Code transcript provides signals for all of these. This approach captures the full picture of how someone thinks and works.
From Solo Developers to Swarms: The Multi-Agent Future
As Claude Code becomes more capable, new topologies emerge for how multiple agents can work together. The concept of multi-agent systems, where several Claude instances coordinate to solve larger problems, has moved from theoretical to practical.
The key innovation is uncorrelated context windows. When multiple agents work on the same problem, they shouldn't share polluted context from each other. Instead, fresh context windows allow agents to approach different aspects independently, then coordinate results. This approach provides more capability than a single agent with the same total context.
One remarkable example demonstrates the potential: Anthropic's plugins feature was entirely built by a swarm over a weekend. An engineer provided a spec and pointed Claude to an Asana board. Claude created tickets, spawned multiple agent instances, and assigned them to tasks. The main Claude coordinated while independent agents—unaware of each other's work—tackled their assigned pieces. After a few days with minimal human intervention, the plugins feature was complete, roughly in the form it shipped.
This pattern is becoming common. When Cherny encounters difficult debugging problems, he now instructs Claude to spawn sub-agents: one examining logs while another analyzes the code path. They work in parallel, faster than sequential investigation. He's calibrated the approach: easy tasks get one agent, medium difficulty might use three, and truly hard problems get five to ten agents researching different angles.
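The coordination pattern Cherny describes, fresh uncorrelated contexts fanned out in parallel with agent count scaled to difficulty, can be sketched with stand-in agents. The `run_agent` function and the difficulty tiers below are hypothetical placeholders for real model calls:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a sub-agent call; a production version would
# invoke the model with its own fresh context window for each agent.
def run_agent(task: str, angle: str) -> str:
    return f"findings on '{task}' from the {angle} angle"

# Agent count scaled to difficulty, echoing the calibration described above
# (one for easy tasks, around three for medium, five or more for hard ones).
AGENTS_BY_DIFFICULTY = {"easy": 1, "medium": 3, "hard": 5}

def investigate(task: str, difficulty: str) -> list[str]:
    """Fan a task out to N independent agents, then collect their results."""
    n = AGENTS_BY_DIFFICULTY[difficulty]
    # Each agent gets only its own angle -- e.g. logs vs. code path -- so
    # their contexts stay uncorrelated rather than sharing polluted state.
    angles = [f"angle-{i}" for i in range(n)]
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(lambda a: run_agent(task, a), angles))
```

The design choice worth noting is that coordination happens only at the fan-in step; during investigation, no agent sees another's intermediate output.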
These multi-agent systems hint at organizational structures of the future. Human project managers might not direct individual contributors; instead, they might configure agent topologies and let AI systems execute work in parallel. The concept of "startup factories" emerges—companies where humans specify the desired outcome and AI systems autonomously deliver results with minimal human code review.
The Terminal's Unexpected Resilience and Future Evolution
When Claude Code first launched from a terminal, Cherny expected it to have a three-month lifespan before better interfaces supplanted it. He was wrong. The terminal hasn't just survived; it's become the core of a multi-platform ecosystem.
Claude Code now exists on web browsers, desktop applications (with a dedicated code tab), iOS and Android apps (also with code tabs), Slack, GitHub, and as extensions in VS Code and JetBrains IDEs. Yet the underlying agent remains unchanged. The terminal experience simply proved so effective that it became a template for adaptations rather than something to escape.
This phenomenon reflects something important about design: constraints breed elegance. The terminal's limits forced simplicity. Small screens, limited colors, and character-based output eliminated unnecessary complexity. Engineers didn't need to learn UI paradigms; they simply asked for what they wanted.
Whether the terminal's long-term prominence continues depends on what comes next. With multi-agent systems enabling increasingly complex work, traditional interfaces might become necessary. But Cherny remains humble about predictions. "I've been wrong so far about the lifespan of the CLI, so I'm probably not the person to forecast," he admits.
Advice for Founders Building on LLMs Today
For founders building developer tools or any product leveraging AI, Cherny offers concrete guidance grounded in Claude Code's experience:
Think about what the model wants to do. The model doesn't want to be trapped in constraints. It wants to interact with the world. If you're building a developer tool, observe what the model naturally tries to do and enable that. Don't restrict it to narrow interactions; let it explore.
Solve real problems for both humans and AI. The best solutions serve latent demand from both users and the model. What problem do you want to solve for humans? Then, when you apply the model to that problem, what is it trying to do? The technical solution should satisfy both.
Don't fight model evolution; accommodate it. Build the product for tomorrow's model, not today's. This requires genuine uncertainty tolerance. You'll be wrong sometimes. That's expected. The question is whether you're making bets on the right direction.
Iterate rapidly with users. The most important product decisions come from observation. Watch users. Talk to them. Walk around your office. Notice patterns in issue trackers and feedback channels. When you spot a clear pattern in behavior, build products around it.
Stay humble about what you know. The smartest approach is thinking scientifically and from first principles rather than relying on strong opinions. Model capabilities change monthly. What worked optimally six months ago might be suboptimal today. Keeping an open mind, testing assumptions, and being willing to be wrong enables better decisions.
The Path Forward: What Comes After Plan Mode
Cherny's most forward-looking insight might be this: plan mode itself has a limited lifespan. As models continue improving exponentially, Claude is learning to recognize situations where it should plan before executing. The explicit user command will eventually become unnecessary.
What comes after? Probably direct user engagement. Anthropic is already experimenting with having Claude interact directly with users on Slack, respond to mentions on social media, and even tweet (though Cherny deletes most tweets because the tone feels off to him).
A common pattern has emerged: Claude Code will examine a codebase, read blame information, and proactively message relevant engineers with clarifying questions. Once answered, it continues working. This creates an entirely different interaction model where the AI acts as a peer developer, actively seeking information it needs rather than waiting passively for user instructions.
The implications are profound. Software development might transition from an activity engineers do independently, using tools for assistance, to an activity where engineers and AI collaborate as genuine peers, each bringing different strengths. Engineers provide judgment, specification, and taste. AI provides execution, exploration, and systematic work.
If this pattern continues, job titles might evolve. "Software engineer" could become "Builder" or "Product Manager"—roles focused on specification, user communication, and direction rather than typing code. The activity of coding itself becomes increasingly abstracted, something the AI handles while humans focus on higher-level problems.
Conclusion: Building for a Transformed Future
Claude Code represents more than a tool for writing code faster. It's a case study in how products emerge from observing latent demand, how constraints breed elegance, and how building for the model of tomorrow, not today, determines competitive success.
Boris Cherny and the team at Anthropic have created something that genuinely delights users while delivering unprecedented productivity gains. But what excites Cherny most isn't Claude Code itself—it's what comes next.
Every few months, the underlying model improves. Every improvement enables new possibilities. Features that seemed essential become redundant. Interfaces that seemed permanent become transitional. The only certainty is change.
For founders building on AI today, the lesson is clear: observe what people are already trying to do, anticipate what models will be capable of in six months, and build with humility about how quickly you'll need to adapt. The future moves faster than anyone predicts. The only sustainable strategy is building for change itself.
Original source: Inside Claude Code With Its Creator Boris Cherny