Claude Code: How AI Will Transform Software Engineering Forever
Key Takeaways
- 100% of code is now being written by AI at Anthropic, with productivity per engineer increasing by 200%
- 4% of all public GitHub commits are currently authored by Claude Code, with predictions reaching 20%+ by end of 2026
- The next frontier moves beyond coding toward AI systems that come up with ideas, review feedback, and plan what to ship
- Latent demand is the most important product principle for understanding how users will misuse tools to solve real problems
- Building for the model six months in the future—not today—is the key to hitting product-market fit when capability arrives
The Seismic Shift in Software Development
The software engineering industry is experiencing an unprecedented transformation. What once seemed like science fiction—AI writing 100% of production code—has become reality in just one year. Boris Cherny, Head of Claude Code at Anthropic, revealed that his entire codebase is now generated by Claude Code. He hasn't edited a single line of code by hand since November. Every single day, he ships between 10 and 30 pull requests, all written entirely by artificial intelligence.
This isn't merely an incremental improvement in developer productivity. This represents a fundamental restructuring of how software gets built. When engineers previously claimed that AI would eventually write all code, skeptics dismissed the idea as unrealistic hype. Today, that prediction has become observable reality across major technology companies. Spotify publicly announced that their best developers haven't written code since December. SemiAnalysis reported that approximately 4% of all public GitHub commits are now authored by Claude Code—a staggering figure that still represents only the beginning of a much larger transformation.
The trajectory is accelerating exponentially. Claude Code's growth metrics don't just show improvement; they show improvement that is itself accelerating. In the past month alone, weekly active users doubled. This exponential growth pattern reflects what Anthropic's founders understood deeply when they published the original scaling laws papers: improvements in AI capability follow exponential curves, not linear ones. When Boris traced the exponential curve of code written by Claude at his company in May 2024, the math pointed to Claude writing 100% of the code by year's end—a prediction that initially made the room "audibly gasp" in disbelief. Yet it happened exactly as projected.
From Terminal Hack to Industry-Reshaping Platform
The origin story of Claude Code reveals how transformative products often emerge from experimental play rather than rigid roadmaps. When Boris first joined Anthropic, he spent one month building "weird prototypes"—most never shipped. The second month involved studying post-training research to understand how models actually work at a foundational level. This unconventional apprenticeship proved essential. Boris discovered that to do excellent work in AI, you must understand the layer beneath where you operate. Traditional engineers study infrastructure and runtimes; AI engineers must understand the models themselves.
What emerged was Claude CLI, later renamed Claude Code. The first prototype demonstrated something surprising: when given a bash tool and asked a question about music currently playing, the model independently discovered how to use that tool to answer the question. Nobody had explicitly instructed it to use the tool for that purpose. The model figured out the connection and executed it. This moment crystallized what Boris was building toward—not a system that followed predetermined workflows, but one where the model reasoned about available tools and autonomously decided when and how to deploy them.
Initial reception was modest. Boris announced it internally and received exactly two likes. Most people couldn't imagine a terminal-based coding interface as a serious product. They expected sophisticated IDEs and polished environments. But Boris had built it in a terminal simply because he was the only person working on it, and the terminal represented the fastest way to iterate while the underlying model improved rapidly. This constraint became a feature: as the model improved explosively month after month, the terminal's simplicity meant it could adapt without redesigning complicated interfaces.
The turning point came when Ben Mann encouraged Boris to create a daily active users chart despite the early stage. That chart went vertical immediately. By February, Claude Code launched externally. Interestingly, the external launch didn't produce instant viral success. It took months for the general development community to understand what this tool actually was. The surprise was that something so alien and unfamiliar to traditional engineering workflows could be genuinely useful. As the product became available across iOS, Android, desktop apps, Slack integrations, and GitHub extensions, more developers encountered it in familiar environments. But that accessibility advantage came only after the terminal version had proven the concept.
The Latent Demand Principle
One of the most valuable insights Boris shared concerns latent demand—arguably the single most important principle for modern product building. This concept explains why certain products succeed wildly while others languish despite superior functionality.
Latent demand occurs when users discover ways to misuse existing tools to solve problems the original creators never anticipated. When developers face genuine needs but no purpose-built solution exists, they'll jump through hoops to adapt available tools. This behavior signals something crucial: if you build a dedicated product for that purpose, users will welcome it enthusiastically.
Facebook's evolution perfectly demonstrates this principle. In 2016, Fiona, the founding product manager, noticed that 40% of all posts in Facebook groups involved buying and selling items. Users had essentially hijacked Facebook Groups, repurposing them as a marketplace despite the platform's original design. This wasn't malicious misuse—it reflected genuine demand. Rather than fighting this pattern, Facebook built Facebook Marketplace as a dedicated product. The result: one of their most successful initiatives.
Similarly, analysis revealed that 60% of profile views on Facebook were views of non-friends of the opposite gender. Users had invented informal "creeping" behavior—again, a signal of latent demand. Facebook Dating emerged as the logical response to this observed pattern.
Claude Code's expansion beyond software development follows this same logic. For six months, the team observed users employing Claude Code in the terminal for tasks completely unrelated to engineering. Data scientists used it for SQL analysis despite having to download Node.js and install CLI tools—substantial barriers for non-engineers. Other users leveraged it to grow tomato plants using AI analysis, analyze their genomes, examine MRI scans, and recover photos from corrupted hard drives. These weren't edge cases; they represented systematic evidence that people urgently needed an AI agent for general-purpose tasks but were willing to endure terrible user experiences to access it.
The traditional product approach meant identifying this behavior and making it easier. The modern approach, especially with large language models, means understanding what the model itself wants to accomplish, then removing obstacles. Rather than boxing the model into narrow workflows—"do step one, then step two, then step three"—the product philosophy inverted: make the model itself the product. Provide minimal scaffolding, give it core tools, and let it decide which tools to use and in what sequence.
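The "make the model the product" idea can be sketched as a minimal agent loop: the model receives a goal and a toolbox, then chooses which tool to call and in what order. Everything below is illustrative, not Anthropic's actual implementation; `run_agent`, `fake_model`, and the tool functions are hypothetical names, and a stub stands in for the real model call.

```python
# Minimal sketch of an agent loop where the model, not a fixed workflow,
# decides the sequence of tool calls. All names are hypothetical.

def read_file(path: str) -> str:
    """Tool: return file contents (stubbed for the sketch)."""
    return f"<contents of {path}>"

def run_shell(cmd: str) -> str:
    """Tool: run a shell command (stubbed for the sketch)."""
    return f"<output of {cmd}>"

TOOLS = {"read_file": read_file, "run_shell": run_shell}

def fake_model(goal: str, history: list) -> dict:
    """Stand-in for a real LLM call. A real system would send the goal,
    the history, and the tool schemas to the model and parse its reply."""
    if not history:
        return {"tool": "read_file", "args": {"path": "README.md"}}
    return {"done": True, "answer": f"finished: {goal}"}

def run_agent(goal: str) -> str:
    """Give the model a goal and a toolbox; let it pick the sequence."""
    history = []
    while True:
        decision = fake_model(goal, history)
        if decision.get("done"):
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append((decision, result))
```

The design point is that the loop contains no step-one/step-two scaffolding: the only fixed structure is "ask the model, run the tool it chose, feed the result back."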
This philosophy birthed Co-work. Built in just ten days by an exceptionally strong team, Co-work took off immediately upon release, with dramatically faster adoption than Claude Code's gradual rise. The team released it deliberately early, acknowledging it was "pretty rough around the edges," because this approach offers unique advantages. In the AI era, releasing products early reveals user patterns you couldn't predict. It also generates invaluable safety data about how models behave when autonomous agents interact with real user systems.
What Happens After Coding Is Solved
The most provocative aspect of Boris's vision concerns what comes next. If coding is effectively solved—a claim supported by the empirical data—the frontier shifts elsewhere. The next evolution involves Claude systems that don't just write code but actively come up with ideas, review user feedback, examine bug reports and telemetry data, and autonomously propose what to ship next. In other words, Claude is transitioning from tool to coworker.
This shift terrifies many product managers. If AI systems start deciding what to build, what becomes of product management roles? Boris's answer is nuanced: coding is solved, but that's only one problem. The field will expand to adjacent work—the general tasks that accompany software development. He uses Co-work every day for work unrelated to engineering: filing parking tickets, managing all project management through spreadsheet synchronization, sending messages across Slack and email, coordinating team communication.
But there's a more profound insight embedded here. The question of whether people should learn to code becomes increasingly irrelevant. In one or two years, coding knowledge won't substantially matter. Yet the broader principles of software development—understanding systems, thinking about problems, considering implications—remain valuable. More importantly, curiosity, creativity, and the ability to envision solutions become more crucial when the mechanical details of implementation disappear.
Boris draws a powerful historical parallel: the printing press and literacy. In mid-1400s Europe, literacy was below 1% of the population. Scribes performed all writing and reading on behalf of lords and kings, many of whom were themselves illiterate. Then Gutenberg invented the printing press. Within 50 years, more printed material existed than in all the previous thousand years combined. Printing costs dropped roughly 100x over the next half-century.
But literacy didn't immediately follow. Learning to read and write requires education systems, free time, and freedom from subsistence labor. Over the subsequent 200 years, global literacy rose to approximately 70%. The scribe profession didn't vanish; it transformed. One documented interview with a 1400s scribe revealed he was excited about the printing press because it freed him from tedious copying. The parts he actually enjoyed—artistic illumination and bookbinding—could now occupy his time fully.
Boris sees a parallel in engineering's transformation. The tedious parts—managing dependencies, wrestling with tools, struggling through minutiae—were never the satisfying aspects. The genuinely rewarding work involves figuring out what to build, having ideas, talking with users, envisioning systems, and collaborating with teammates. As Claude handles the mechanics, engineers increasingly focus on the creative, strategic, and human-centered work. The software engineer title itself may disappear, replaced by "builder" or simply acknowledging that everyone codes while maintaining specialized roles based on interests and strengths.
Safety, Alignment, and the Development Philosophy
Running concurrent with Claude Code's explosive growth is an equally important thread: safety and alignment. At Anthropic, this isn't a peripheral concern; it's foundational. The entire company exists to ensure AI development proceeds safely. Everyone working there, regardless of function, came specifically because of that mission.
Anthropic approaches safety through three distinct but interconnected layers. The lowest layer involves alignment research and mechanistic interpretability—understanding what happens inside the model at the neuron level. Researchers like Chris Olah have pioneered this field, enabling scientists to trace how concepts become encoded, how planning mechanisms work, and how the model thinks ahead. This isn't mere theoretical curiosity; it allows intervention. If a neuron related to deception activates, researchers can now monitor and understand that behavior.
The second layer comprises evaluations—laboratory conditions where the model faces synthetic scenarios designed to test whether it behaves correctly and remains aligned. It's like studying an organism in a petri dish: controlled, observable, safe.
The third layer—the one most people overlook—is observing the model in the wild. As models become more sophisticated and autonomous, behavior in laboratory conditions becomes an unreliable predictor of real-world performance. A model might pass rigorous evaluations while behaving problematically when deployed. This gap necessitates early release and real-world monitoring.
Claude Code was released internally months before external launch because Anthropic needed to study whether their first major deployed agent was actually safe. They had never before released a broadly-used coding agent. The uncertainty was substantial. Only after months of internal observation and iterative improvements did they feel confident enough to release externally.
Co-work followed a similar pattern. Despite looking good on alignment metrics and passing evaluations, it's fundamentally different from Claude Code. Here, an agent acts on behalf of users, accessing Gmail, Slack, and other systems. The safety considerations multiply. Co-work launched as a research preview precisely because studying real-world behavior remains essential. The iterative improvements continue through every release.
This philosophy represents what Anthropic internally calls "the race to the top." Rather than hoarding safety advances, Anthropic open-sources tools and publishes research freely. For Claude Code, they released an open-source sandbox enabling agents to run within specified boundaries, preventing unconstrained system access. Notably, the sandbox works with any agent, not exclusively Claude Code, because Anthropic believes competitive pressure toward better safety benefits everyone.
Practical Principles for Building with AI
Several specific principles emerge from Boris's year of building Claude Code. These aren't abstract philosophy; they're hard-won lessons from shipping products at speed while maintaining quality.
First: Don't box the model in. Many teams instinctively constrain models into narrow roles, treating them as components in larger systems with rigid step-by-step workflows. This instinct is understandable but wrong. Models consistently outperform when given tools and goals, then permitted to reason about how to achieve those goals. A year ago, extensive scaffolding was necessary because models couldn't sustain long chains of reasoning. Today's models—particularly Opus 4.6—eliminate that need. The principle holds: provide tools, define the goal, trust the model's reasoning.
Second: Embrace "The Bitter Lesson." Rich Sutton articulated this concept roughly ten years ago: general models always outperform specific ones in the long run. Applied to AI products, this means resisting the temptation to fine-tune extensively, build custom models for specific tasks, or over-optimize architecture. These approaches might yield 10-20% improvements, but they're consistently wiped out when the next-generation, more general model arrives. Better to bet on continued model improvement than to build increasingly specific systems.
Third: Build for the model that will exist six months hence, not today's model. This creates discomfort. Your product will feel incomplete in the present, lacking product-market fit. But when the predicted capability arrives, you'll be ready. Claude Code didn't really start accelerating until Opus 4 arrived, at which point growth became exponential. Had Boris optimized for Sonnet 3.5, the product would have been tuned for a model that would soon become obsolete.
Fourth: Underfund projects slightly. Constraints breed creativity and speed. When Boris was the sole person working on Claude Code, speed was the only advantage. Now, the principle persists: give great engineers fewer resources than they request, empower them to move quickly, and they'll innovate in ways that fully-resourced teams never achieve. Combined with access to abundant tokens, underfunding forces prioritization and creative efficiency.
Fifth: Move fast, relentlessly. The pace of model improvement is breathtaking. If you can ship something today, ship it today. Waiting introduces unnecessary risk that a better approach will emerge, or that competing teams will move faster. This applies across the stack: release early, learn from users, iterate. The product will be rough. Users don't mind rough if it solves their problems faster than alternatives.
Practical Tips for Using Claude Code
For engineers new to Claude Code or seeking to improve their proficiency, Boris offered specific recommendations.
Use the most capable model. Currently, that's Opus 4.6. Counter-intuitively, the less powerful models (Sonnet, Haiku) often prove more expensive because they require more tokens to accomplish the same task. They need more correction, more hand-holding, more iterations. The most capable model solves problems faster with fewer corrections, resulting in lower total token usage and better results.
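The cost claim above is simple arithmetic: cost per task is price per token times tokens consumed, and a model that needs fewer corrective iterations can be cheaper overall despite a higher per-token price. The numbers below are made up purely to illustrate the shape of the comparison; they are not real pricing for any model.

```python
# Illustrative arithmetic only: prices and token counts are invented
# to show why a pricier-per-token model can cost less per finished task.

def task_cost(price_per_mtok: float, tokens_used: int) -> float:
    """Total cost in dollars for one completed task."""
    return price_per_mtok * tokens_used / 1_000_000

# Hypothetical: the capable model solves the task in one shot;
# the cheaper model burns far more tokens on corrections and retries.
capable = task_cost(price_per_mtok=15.0, tokens_used=50_000)   # $0.75
cheaper = task_cost(price_per_mtok=3.0, tokens_used=400_000)   # $1.20
assert cheaper > capable
```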
Leverage plan mode. The implementation is deceptively simple: inject one sentence into the model's system prompt: "Please don't write any code yet." That's literally all plan mode does. From the terminal, shift-tab twice activates it. For desktop and web, buttons are available. The model articulates its approach before executing. Once the plan looks good, enable auto-accept edits. With Opus 4.6, one-shot execution after a solid plan succeeds nearly every time.
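As described, the plan-mode mechanism amounts to one extra sentence in the system prompt. A sketch of that toggle, assuming a hypothetical `build_system_prompt` helper rather than Claude Code's actual internals:

```python
# Sketch of the plan-mode mechanism as the source describes it:
# the only change is one appended sentence. Hypothetical helper names.

PLAN_MODE_SUFFIX = "Please don't write any code yet."

def build_system_prompt(base_prompt: str, plan_mode: bool) -> str:
    """Return the system prompt, appending the plan-mode sentence
    when plan mode is active."""
    if plan_mode:
        return f"{base_prompt}\n\n{PLAN_MODE_SUFFIX}"
    return base_prompt
```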
Explore different interfaces. While Claude Code originated in the terminal—where it supports macOS, Windows, and various terminal emulators—that's not the only way to use it. Interfaces span iOS and Android apps, desktop applications, Slack integrations, GitHub integrations, and web access. Different users prefer different environments. The same Claude agent and code generation capabilities run everywhere. Find the interface that matches your workflow.
Keep multiple agents running. One of the capabilities that surprised Boris is the ability to maintain multiple concurrent Claude Code sessions. While one agent handles a task, another can work on something else. In his current workflow, he might start a coding task, shift to project management with Co-work, launch another code session, and maintain parallel execution. Agents can now run for extended periods—from tens of minutes to hours or even days—without degradation, eliminating the constant hand-holding required a year ago.
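The parallel-sessions workflow can be sketched with standard concurrency primitives: launch several independent sessions and check in as each one finishes. The `agent_session` stub below is a hypothetical stand-in for a real long-running Claude Code session; only the fan-out pattern is the point.

```python
# Sketch of running several agent sessions concurrently.
# `agent_session` is a stub; a real session would run for minutes to hours.
from concurrent.futures import ThreadPoolExecutor

def agent_session(task: str) -> str:
    """Stand-in for one long-running agent session."""
    return f"done: {task}"

tasks = ["fix flaky test", "draft release notes", "triage bug reports"]

# Each session runs independently; you review results as they complete.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(agent_session, tasks))
```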
The Broader Implications for Technical Careers
As a thought leader in one of the most disruptive technological shifts in software engineering, Boris doesn't shy away from the uncomfortable implications. The profession is transforming fundamentally. Several concerns deserve explicit acknowledgment.
Skills atrophy: Will engineers stop understanding foundational concepts if they never write code? Boris's perspective: this concern is overstated but not invalid. Programming exists on a continuum. Sixty years of computing involved similar transitions—from punch cards to assembly to compiled languages to virtual machines. Each transition prompted similar anxiety. Moreover, understanding the layer beneath your work remains valuable for good engineering, at least for the next year or two. But eventually, it becomes irrelevant, like assembly code running invisibly under high-level languages. Most engineers will adapt, though adjustment varies by individual and background.
Job displacement: The numbers are undeniable. Claude Code is responsible for a non-trivial fraction of all code commits globally. As this percentage climbs toward 20%+ by end of 2026, what becomes of software engineers? Boris's answer: titles and roles will shift, but demand for builders persists. The jobs that remain rewarding involve strategy, user understanding, system design, and coming up with what to build. These activities scale with the ability to execute. If one engineer can accomplish what ten previously could, the value of that engineer in roles involving judgment and decision-making increases dramatically.
Career transitions: Some roles will vanish. Positions focused exclusively on code-writing mechanics become obsolete. Junior engineers historically learned by writing code; if Claude Code writes the code, how does learning happen? This is a genuine challenge without obvious answers. Mentorship models will evolve. More junior engineers may move directly into product-facing roles, learning through conversation with users and strategic discussion with mentors rather than through isolated coding work.
Designing for the future: Those who will thrive professionally over the next few years share common traits: curiosity about what users actually need, comfort operating across technical and non-technical domains, tendency to ask good questions rather than defend predetermined answers, and genuine desire to build things that matter. The title—whether "engineer," "product manager," "builder," or something else entirely—matters less than the mindset.
Evidence From the Field
The empirical evidence supporting this transformation is mounting. Spotify announced that their best developers haven't written code since December. Multiple major technology companies report similar patterns. On a personal level, Boris represents only one data point, but a significant one: a prolific engineer who previously ranked among the most productive at Meta is now shipping 10-30 pull requests daily, all generated by Claude Code, without manual editing. His productivity, measured in pull requests, has roughly doubled despite not personally writing any code.
Anthropic's broader metrics are equally striking. Over the year since Claude Code's introduction, the company roughly quadrupled its engineering headcount. Yet productivity per engineer increased by 200% in terms of pull requests. At Meta—where Boris previously oversaw code quality across Facebook, Instagram, and WhatsApp—achieving even single-digit percentage productivity improvements across hundreds of engineers represented significant success. These 200% gains are categorically unprecedented.
An informal Twitter poll Boris conducted revealed that roughly 70% of engineers and product managers report enjoying their work more with AI assistance. Roughly 10% report enjoying it less. Interestingly, designers showed a different pattern: 55% enjoying work more, 20% enjoying it less. This variation suggests that AI's impact differs across disciplines. At Anthropic, designers typically code, and many report that having Claude Code unblock them from technical obstacles actually enhances enjoyment. The pattern would likely differ at organizations where design roles are more narrowly scoped.
The Historical Moment and Long-Term Vision
To contextualize the current moment, Boris offered a perspective grounded in history. The printing press serves as the most useful historical analog. When Gutenberg's printing press emerged in the mid-1400s, literacy in Europe sat below 1%. All writing and reading was performed by a tiny professional class—scribes employed by powerful institutions, many of whom didn't personally read. The printing press precipitated explosive growth in printed material; more books appeared in 50 years than in the thousand years prior. Printing costs dropped 100x.
Literacy didn't immediately follow. That took 200 years, requiring development of education systems, universal literacy programs, and economic conditions freeing people from subsistence labor. But the transformation was inevitable and total.
A surprising historical detail adds nuance: one documented scribe, interviewed about the printing press, expressed excitement. The aspects he disliked—copying text between books, the sheer mechanical repetition—were now handled by machines. The work he actually enjoyed—artistic illumination, decorative elements, bookbinding—could now occupy his full attention. His expertise remained valuable; its application simply shifted.
Boris sees this pattern repeating with software engineering. The tedious mechanical work—wrestling with dependencies, managing build systems, searching Stack Overflow, debugging trivial issues, writing boilerplate—was never the satisfying part for most engineers. The genuinely rewarding aspects involve understanding problems deeply, envisioning solutions, collaborating with others, and discussing the implications of technical decisions. As Claude handles the mechanics, engineers shift toward work that's fundamentally more human.
Post-AGI, if that concept means anything, Boris envisions a different life. Before joining Anthropic, he lived in rural Japan, the only engineer and English speaker in a small town. He biked to farmers markets, organized his time around seasons, and made miso. That pace fascinated him. Miso production teaches you to think in genuinely long time scales: white miso requires three months; red miso requires two to four years. You mix it, then wait patiently. This contrasts sharply with the engineering mindset of shipping constantly, optimizing for quarterly metrics, and moving faster.
Post-AGI or outside Anthropic's mission, Boris says he'd probably return to that life and deepen his miso practice. The honest answer reveals something important: for those driving the transformation, the work itself—building toward safe artificial general intelligence—matters profoundly. Without that mission, the appeal diminishes significantly.
Conclusion
Claude Code's first year represents a genuine inflection point in software development. The shift from "AI assists humans writing code" to "AI writes code while humans direct strategy" happened faster than nearly anyone predicted. The exponential growth curves that Anthropic's founders understood theoretically have become tangible reality affecting millions of engineers' daily work.
The implications extend far beyond productivity metrics. They encompass how we structure technical careers, what skills remain valuable, how we develop junior talent, and what "software engineer" even means. These questions don't have clean answers yet. But the direction is clear: coding as a bottleneck is being eliminated, revealing what actually matters in building excellent software.
For anyone working in technology, the moment demands attention. The tools are here. The adoption is accelerating. The question isn't whether this transformation will happen—it's happening now. The question is what you'll build and contribute when the mechanical barriers to execution largely disappear.
Original source: Head of Claude Code: What happens after coding is solved | Boris Cherny