Claude's $1B to $19B Growth: How Anthropic Became AI's Fastest-Growing Company
Key Insights
- Explosive Growth: Anthropic scaled from $1 billion to $19 billion in annual recurring revenue in just 14 months, making it one of the fastest-growing companies in history
- Growth Automation: The company leverages Claude through "CASH" (Claude Accelerates Sustainable Hyper-growth) to automate growth experimentation, delivering impressive results with minimal human oversight
- Focus on Activation: Rather than relying solely on product excellence, Anthropic's growth team strategically addresses activation challenges through intelligent onboarding and user education
- Mission-Driven Success: Safety and quality aren't barriers to growth—they're competitive advantages that drive long-term success and user trust
- Exponential Thinking: Operating at exponential scales requires fundamentally different strategies; micro-optimizations pale compared to bets that unlock new markets worth 100-1000x current value
How Anthropic Went From Underdog to Growth Juggernaut
When Amol Avasare joined Anthropic as Head of Growth, the company was already impressive but faced a seemingly impossible challenge: maintain hypergrowth at an increasingly massive scale. The numbers tell an extraordinary story. In 2023, Anthropic grew from zero to $100 million in revenue. In 2024, they scaled from $100 million to $1 billion. By early 2026, they had reached approximately $19 billion in annual recurring revenue.
For context, Amol describes Anthropic's early position as genuinely precarious. Unlike giants like OpenAI, Google, or Meta, Anthropic lacked established cash flow, extensive distribution networks, or first-mover advantage. They were the "smallest and least well-funded player in the space." Yet through disciplined focus, exceptional talent, and innovative growth strategies, they've become what many consider the fastest-growing AI company in history.
What makes this achievement remarkable isn't just the velocity—it's that Anthropic maintained exponential growth while navigating the unique challenges of scaling a frontier AI product. Each quarter brought new challenges: as revenue doubled and tripled, infrastructure broke, activation funnels needed reimagining, and the entire organization had to constantly reinvent itself.
Amol's journey to Anthropic itself reveals something important about how the best talent finds its way to transformative companies. He cold-emailed Mike Krieger, Anthropic's Chief Product Officer and Instagram co-founder, with a simple proposition: "You have an incredible product, but you're missing a dedicated growth team." At the time, Anthropic wasn't even actively hiring for the role. Yet Krieger was impressed enough to hire Amol based on that single email. As Krieger later told him, "You're the only PM I've ever hired via cold email."
This cold email success wasn't accidental. Amol credits his background as a founder for teaching him the art of crafting subject lines that get opened, messages that resonate, and the persistence to follow up multiple times. He discovered that most people only send one cold email and give up. His philosophy: keep reaching out until someone explicitly asks you to stop. That persistence, combined with strategic thinking about where to find people and how to craft messages, explains his success.
The growth story at Anthropic also reveals something counterintuitive: having an exceptional product doesn't mean you can ignore growth discipline. In fact, Amol would argue the opposite. When a product is genuinely revolutionary—like Claude—growth teams must become more strategic, not less. They must ask difficult questions about activation, user education, monetization, and how to help users understand what's actually possible with these new capabilities.
Understanding "Success Disasters" and Growth at Scale
One of Amol's most revealing insights is his breakdown of how he spends his time: roughly 70% on what the team internally calls "success disasters," and 30% on traditional growth work. The term sounds paradoxical, but it means exactly what it says: problems caused by things going so well that other systems break.
Consider Facebook's early scaling challenges, Uber's infrastructure problems, or DoorDash's logistics nightmare. Rapid growth creates cascading failures. At Anthropic, this might mean: the sign-up funnel suddenly experiences 10x traffic and converts terribly under load; customer support gets overwhelmed; payment processing fails; the onboarding experience that worked for thousands now fails for millions. Every system designed for one scale breaks at the next scale.
Amol spends enormous time firefighting these critical issues. When you look at the charts—all showing hockey-stick growth curves, all green, all pointing up and to the right—the underlying reality might be chaos. A feature that worked brilliantly for early users creates confusion for the mainstream. A pricing model optimized for one cohort alienates the next. Infrastructure that scaled to millions can't handle tens of millions.
The remaining 30% of his time goes to what he calls "bread-and-butter growth work": deciding which products to prioritize, thinking about long-term pricing strategy, optimizing core funnels for new products like Claude Code and Co-work, and planning for the next wave of AI capabilities that will inevitably shift everything.
This distinction matters because it explains why traditional growth frameworks often fail at companies like Anthropic. A growth team at a mature company might run 200 small experiments per quarter, optimizing conversion rates by fractions of percentages, and claim victory. That approach is mathematically impossible at Anthropic's scale. A 1% conversion improvement on billions of potential interactions is economically massive, yes—but it's dwarfed by the impact of releasing a new product, opening a new market, or enabling entirely new use cases through research breakthroughs.
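The arithmetic behind this trade-off is worth making concrete. The figures below are illustrative assumptions, not Anthropic's actual numbers, but they show why large bets dominate even when micro-optimizations are worth hundreds of millions:

```python
# Illustrative expected-value comparison; all inputs are assumed for the sketch.
base_revenue = 19e9        # current ARR from the article
micro_opt_lift = 0.01      # a 1% funnel improvement
micro_opt_value = base_revenue * micro_opt_lift   # large in absolute terms

new_market_multiple = 100  # a bet that could unlock a 100x-scale opportunity
bet_success_prob = 0.05    # even at only a 5% chance of success...
bet_expected_value = base_revenue * new_market_multiple * bet_success_prob

# The risky bet's expected value exceeds the sure optimization by ~500x,
# which is why the team indexes toward large bets.
print(f"micro-opt: ${micro_opt_value:,.0f}")
print(f"big bet (expected): ${bet_expected_value:,.0f}")
```

Under these assumptions the 1% optimization is worth $190 million while the big bet's expected value is $95 billion, so even heavily discounting for failure risk, the bet wins.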
Activation: The Hidden Lever for AI Product Growth
One of Amol's biggest contributions to Anthropic's growth has been his relentless focus on activation—specifically, helping new users understand what Claude can actually do for them. This seems obvious in hindsight, but it's surprisingly overlooked in many AI products.
The activation challenge in AI is fundamentally different from traditional SaaS. When you signed up for Slack in 2013, you understood what it did: it was a communication tool. Onboarding could focus on helping you set up channels, import your email, and send your first message. But when someone signs up for Claude, what should they actually do? Ask about the weather? Write code? Brainstorm a novel? Generate images? The model is so capable that new users often have no idea where to start.
Amol's team addressed this by implementing an intelligent onboarding flow. Instead of assuming all users are the same, they ask questions to understand who the user is and what they care about. A software engineer gets guided toward Claude Code and coding use cases. A writer gets guided toward content generation. A student gets different recommendations than a researcher.
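The routing logic behind that kind of flow can be sketched as a mapping from a self-reported persona to a recommended starting point. The personas and suggestions below are hypothetical examples for illustration, not Anthropic's actual onboarding configuration:

```python
# Illustrative sketch of persona-based onboarding routing.
# Persona names and recommendations are made-up examples.

ONBOARDING_TRACKS = {
    "software_engineer": ["Try Claude Code on a real repository", "Ask for a code review"],
    "writer": ["Draft and edit long-form content", "Brainstorm outlines"],
    "student": ["Get a concept explained step by step", "Summarize reading material"],
    "researcher": ["Analyze a paper", "Compare methodologies"],
}

def recommend_first_steps(persona: str) -> list[str]:
    """Return starter use cases for a self-reported persona.

    Unknown personas fall back to a generic suggestion rather than
    blocking the user -- the friction should educate, not obstruct.
    """
    return ONBOARDING_TRACKS.get(persona, ["Ask Claude to help with today's hardest task"])

print(recommend_first_steps("software_engineer"))
```

The design point is the fallback: personalization questions should never become a gate that stops a user who doesn't fit a predefined segment.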
Some might view this as adding friction—making the onboarding process longer and more complex. But Amol learned this principle during his time at Mercury, the banking platform. When Mercury invested in improving onboarding quality rather than chasing vanity metrics, they discovered something counterintuitive: more friction, done right, drives better outcomes.
At Mercury, they broke down complex input forms into multiple, smaller screens. This reduced cognitive load and significantly improved completion rates. At MasterClass, quizzes personalize the experience. At Calm, a quiz appears before you can access the app. The pattern is consistent: when friction helps users understand why a product is for them, it dramatically improves activation and long-term retention.
Another clever move Anthropic made was enabling users to import their chat history from ChatGPT. This directly addressed the "cold start problem" for users switching from OpenAI's product. Suddenly, new Claude users didn't start from zero—they had context, history, and a natural transition path.
The deeper insight here is about data leverage. When Anthropic understands who their users are through onboarding questions, they gain valuable information for lifecycle marketing. Even if a user initially churns, the team now knows what use cases they care about, what segments they fall into, and how to potentially re-engage them with relevant features. That understanding compounds over time.
Building a Growth Organization at Hypergrowth Scale
Anthropic's growth team has expanded to roughly 40 people, structured in a way that balances two competing needs: maintaining focus on specific problems while thinking systematically about growth across all products.
The structure includes both "horizontals" and "verticals." Horizontals like Growth Platform and Monetization think about growth mechanisms across the entire product suite. Verticals focus on specific audiences: B2B growth, Pro Code growth, Knowledge Worker growth, API growth. This structure exists because Anthropic has multiple products with very different audiences and different adoption curves. Claude for consumers looks nothing like Claude Code for developers or Claude for Sheets for financial professionals.
Within each team, there are engineers, designers, product managers, and data specialists—the typical modern growth organization. But Amol notes one significant difference: Anthropic indexes much more heavily toward larger bets than smaller optimizations. On most growth teams, the split might be 70-80% small-to-medium experiments and 20-30% larger bets. At Anthropic, it's flipped: 50-70% of effort goes to large bets, with only 20-30% to incremental optimizations.
This makes perfect sense given the context. A 1% conversion improvement on a mature product might be worth millions. But at Anthropic, the future product value could be 100 to 1000 times greater than today's value. The growth team is operating in a world where agentic coding—a category that barely existed 18 months ago—is now larger than the entire previous market for AI-assisted coding. New markets are constantly being unlocked.
Therefore, missing big opportunities for the sake of chasing small optimizations would be strategically foolish. The team is focused on questions like: "What happens when Claude can do 10x more than it does today? What new use cases become viable? How do we prepare for that?" rather than "Can we increase sign-up conversion by 0.5%?"
This organizational structure also reflects a reality about how growth works in a multi-product company: you need product-minded engineers who can think beyond their narrow domain. Anthropic actively hires for this. If a project requires two weeks or less of engineering time, the engineer is effectively on the hook to act as the PM for that project—talking to security, legal, and cross-functional teams, driving the whole effort. The dedicated PM is looped in for advice if needed, but otherwise stays out of the way.
This approach works because the company has deliberately built a culture of ownership and excellence. But it also means that as AI capability improves, the nature of PM and growth roles is shifting. Engineers are becoming more powerful, which is creating pressure on PMs and designers. Rather than hiring fewer people, the company is hiring more PMs—because the leverage gap between what engineers can do with AI assistance and what PMs can do is growing, and the bottleneck has shifted to thinking, strategy, and cross-functional coordination.
Automating Growth Experimentation With CASH
Perhaps the most forward-looking initiative at Anthropic is the automation of growth itself through what the team calls "CASH": Claude Accelerates Sustainable Hyper-growth. This is growth experimentation powered by AI.
Here's how it works: The growth platform team uses Claude to identify growth opportunities based on historical data and current trends. Claude then proposes specific experiments—copy changes, UI tweaks, feature flag combinations. Humans review these proposals to ensure they align with brand values and safety considerations. If approved, the experiments are built and shipped. Results are measured, and Claude learns from the outcomes, proposing better experiments the next iteration.
This is still early—the initiative started just a couple of months ago—but it's already showing impressive results. The quality of ideas isn't yet at senior PM level, but it's comparable to what you'd expect from a junior PM with two to three years of experience. And the exponential improvement curve is steep. Each new version of Claude (Opus 4.5, Opus 4.6) unlocks capabilities that weren't possible before.
Amol is careful to note that human review won't disappear anytime soon. Cross-functional stakeholder alignment—getting legal, design, product, and other teams on the same page—remains fundamentally a human problem. You could theoretically have an AI agent convince everyone to align around a decision, but that's a different problem from generating ideas.
However, for the technical execution side—identifying opportunities, building experiments, testing, and iterating—automation is becoming feasible. This represents a significant shift in how growth teams will operate. Rather than product managers spending 40% of their time running small experiments, they might spend 5% approving and reviewing AI-generated experiments, then 95% thinking about strategy, understanding users, and making bigger bets.
The key insight is that AI is moving from "do what I tell you" to "figure out what to do." Growth teams are experiencing this transformation earlier than other functions because growth is fundamentally data-driven with clear feedback loops. The model can learn what works and what doesn't. Smaller PM tasks have clear success metrics. Over time, as AI capabilities improve, this automation will spread to larger and larger problems.
The Future of Product Management, Engineering, and Design in an AI World
Amol's perspective on how these roles are evolving deserves serious consideration. Currently, engineers are experiencing the most leverage from AI tools like Claude Code. A team of five engineers might become two to three times more productive. Meanwhile, designers and product managers have seen productivity gains, but not at the same scale—yet. This creates an asymmetry: engineers are getting 3x more powerful, but the number of PMs and designers hasn't increased proportionally, which means they're now managing larger, more complex organizations with the same number of people.
This puts PMs and designers in a genuinely difficult position. They're stretched thin. The solution Anthropic is exploring is multi-pronged:
First, hire more product-minded PMs. This seems obvious, but many companies underhire because they think their existing PMs can handle more. Anthropic is aggressively hiring because they recognize the bottleneck.
Second, empower product-minded engineers to act as mini-PMs on smaller projects. This requires a specific type of engineer: someone who cares about users, thinks about the product holistically, and can navigate cross-functional stakeholder challenges. Not all engineers fit this profile, but those who do become dramatically more valuable.
Third, rethink how much time PMs should spend shipping versus thinking. At smaller companies, PMs might ship 30-40% of features themselves. At Anthropic's current scale, the highest-leverage use of a PM's time is probably thinking about what to build, why to build it, and helping engineers build better products. Shipping 10% more features probably matters less than improving every engineer's decision-making by 5%.
But here's the nuance Amol emphasizes: this advice is context-dependent. At a five-person startup, the PM absolutely should be shipping features. At a 500-person company with 50 engineers per PM, it's different. And sometimes, even in a large company, the PM should ship—particularly to rapidly test a hypothesis or build credibility for a controversial idea.
What's clear is that all three roles are being fundamentally reshaped by AI. The skill sets that matter are evolving. Pure execution speed matters less (because AI can help with that). Understanding user needs, thinking strategically about product direction, and maintaining organizational alignment matter more. Design craft and ability to think through complex multi-stakeholder problems are increasingly valuable.
Proven Growth Principles That Actually Work
Amol draws on his experience at Mercury to articulate a principle that should be obvious but is often violated: quality drives growth.
At Mercury, rather than chasing metrics, they spent an entire quarter improving onboarding quality. The output didn't look impressive—just a cleaner, more thoughtful sign-up flow with better explanations. But the result was a massive improvement in completion rates and downstream retention. The insight was simple: people appreciate when a company respects their time and intelligence enough to do things properly.
This principle extends to Anthropic's entire philosophy. They're explicitly comfortable leaving money on the table to preserve brand integrity, maintain quality, and ensure user experience remains excellent. This might sound counterintuitive to growth-at-all-costs practitioners. But look at the actual companies doing best: Apple, Netflix, Slack, Discord—all of them have been willing to sacrifice short-term metrics for long-term brand strength and user satisfaction.
Another principle that recurs throughout Amol's work: friction, when implemented thoughtfully, drives better outcomes. Most product teams obsess over removing friction. But Amol's insight is more nuanced: remove "annoying friction" that doesn't add value, but add friction that helps users understand the product better.
When Anthropic asks onboarding questions, that's friction. When MasterClass makes you take a quiz before purchasing a course, that's friction. When Calm makes you answer questions about your sleep habits before accessing their app, that's friction. In each case, the friction serves a purpose: it helps the company understand the user better and helps the user understand what they're about to buy.
Finally, there's the principle of what Amol calls "leaving money on the table." Growth teams often optimize every funnel to squeeze maximum value out of every user. But this is shortsighted. If you abuse users, manipulate them, or extract maximum value today, you lose the ability to engage them tomorrow. The best approach is to deliberately leave some money on the table—maintain high ethical standards, protect user experience, prioritize brand integrity—because this builds long-term trust and engagement that's worth far more than today's extra revenue.
Building Culture as a Competitive Moat
One of Amol's most profound observations is that Anthropic's secret sauce isn't the product, the research, or even the distribution—it's the culture and the people.
Anthropic's culture is obsessively mission-driven. Not in the performative sense where company values are written on a wall. But in the visceral sense where employees genuinely understand both the potential upside and serious downside risks of powerful AI systems. They work with the explicit understanding that the stakes are planetary-scale. For most people, that kind of existential weight on your work would be paralyzing. At Anthropic, it's energizing.
Amol was skeptical when he joined. He didn't know anyone at the company and worried about whether the mission was real or just marketing. Within weeks, he realized the mission was even more serious internally than externally. Everyone is fully committed. He hasn't met a single person who's "phoning it in."
The culture is also refreshingly open. Leadership shares extensively through internal "notebook channels"—personal Slack spaces that function like internal Twitter feeds. Dario, the CEO, shares his thinking publicly within the company. People respectfully challenge him. New employees have complete visibility into how thinking evolves at the top of the organization.
The talent density is staggering. You have researchers at the absolute frontier of AI. You have Instagram's co-founder running product. You have people who've led growth engineering at the best companies. You have a former U.S. ambassador to Australia now working as an employee. The diversity of backgrounds and expertise means that cross-functional conversations take on a different quality. Everyone brings something unique.
This matters for growth because the best ideas often come from people who understand both their domain and adjacent domains. An engineer with deep financial services background building Claude for Excel brings a unique perspective. A growth person with sales background understands activation differently than someone who's only done growth. A PM with founder experience thinks about problems at a different level.
The culture also enables transparency about challenges and failures, which is essential for learning and improving. Growth is messy. Experiments fail. Product launches disappoint. At some companies, this is swept under the rug. At Anthropic, it's discussed openly, and the lessons are extracted and shared.
Navigating the AI Safety-Growth Tension
Anthropic is structured as a Public Benefit Corporation rather than a traditional C-Corp, which is unusual for a venture-backed company. This structure lets the company state explicitly that maximizing shareholder value is not its overarching goal. Instead, they can optimize for public benefit alongside commercial success.
This structure matters because it creates legal protection for the company to make decisions that prioritize safety, ethical considerations, and broader human benefit over immediate profits. When controversies arise about AI capabilities, safety risks, or ethical implications, the company isn't constrained by fiduciary duty to shareholders to always choose the profit-maximizing option.
From a growth perspective, this manifests in specific ways. When controversial growth ideas are proposed, Amol categorizes them into two buckets: things that violate core values and safety principles (these get rejected outright), and things that are ethically ambiguous (these are evaluated on a case-by-case basis).
The company has voluntarily delayed product launches, withheld certain capabilities, and avoided monetization strategies that might have driven higher short-term revenue because they conflicted with safety considerations. To external observers, this might look like leaving money on the table. From Anthropic's perspective, it's the price of being responsible stewards of powerful technology.
And here's the fascinating part: it's working. By prioritizing safety and ethics, Anthropic has built tremendous trust with users, enterprises, policymakers, and the public. This trust is becoming a competitive advantage. As regulation tightens and governments establish norms around AI, companies that have demonstrated ethical commitment from day one are positioned far better than those that cut corners.
The lesson generalizes: the best long-term strategy often involves leaving money on the table in the short term. This applies not just to AI safety, but to brand building, user trust, and sustainable growth more broadly.
Learning From Failure: The Founder Experience
Amol's career arc isn't a straight line to success. Early in his entrepreneurial journey, he founded a startup focused on quantifying mental health metrics to predict conditions like anxiety and depression. He raised a couple million dollars, built a team of seven to ten people, and spent three years on it. Then he had to shut it down and tell investors they'd lost their money.
He describes this as the biggest setback of his career—more painful even than the traumatic brain injury he later suffered. There's a particular shame that comes with entrepreneurial failure: letting down people who believed in you, employees who took a chance on your vision, investors who trusted you with their capital.
But Amol learned something crucial through that experience: the long-term nature of careers. The failure felt like the end of the world in the moment. But it wasn't. The skills he learned—how to think about problems, how to build products, how to communicate effectively—became the foundation for his subsequent career. Without that failure, he probably wouldn't have become a PM. He wouldn't have learned cold email. He wouldn't have had the skills that eventually got him hired at Anthropic.
This is a pattern throughout Amol's story: constraints and challenges, when navigated with resilience, often become sources of strength. He learned this most viscerally through his traumatic brain injury in 2022.
Transforming Tragedy Into Resilience: The Brain Injury Chapter
The most remarkable part of Amol's story isn't about growth hacking or product strategy. It's about human resilience in the face of catastrophic injury.
In early 2022, Amol suffered a traumatic brain injury during sparring. It was a normal training session, nothing unusual, but one hit to the head at the wrong angle changed everything. For the first two months, he couldn't work. He couldn't listen to music for more than 20 seconds without nausea. He couldn't look at screens. He couldn't shower or use the bathroom alone. His wife did everything for him, including texting friends on his behalf.
The medical reality was uncertain. It wasn't clear if he'd ever work again. He and his wife had serious conversations about what their life would look like if he didn't recover. For nine months, he slowly pushed himself—gradually increasing his tolerance to stimulation, working with doctors, doing rehabilitation—to rebuild basic functions.
The recovery was brutal and took years. And then, in mid-2023, after he'd posted his recovery story and was one month into his new role at Anthropic, he was re-injured. A bag struck his head as he was getting off a plane. Just like that, he was off work for another two months during a critical growth period.
He's still not 100% healed. He experiences periodic dizziness and headaches that he has to manage.
What's remarkable is his perspective on this experience. Rather than seeing it as a tragedy that derailed his career, he sees it as one of the best things that could have happened. The injury forced him to implement practices—meditation, strategic breaks, avoiding caffeine, prioritizing sleep—that he believes make him significantly more effective. He does a meditation retreat at least once a year. He takes breaks throughout even the most intense workdays.
This might sound like wellness advice, but it's actually deeper than that. Amol has developed a philosophy about constraint and freedom. When you're constrained by injury, you're forced to adapt. And adaptation, done right, creates a kind of freedom—the freedom from needing everything to go perfectly, the freedom from being controlled by circumstances, the freedom to act effectively even in chaos.
He learned this through something his meditation teacher said: "True freedom in life is learning how to be content when you don't get what you want." This isn't passive resignation. It means doing everything in your power to achieve your goals—diet, exercise, treatment, work—while simultaneously accepting that you might not get the outcome you want, and being okay with that.
This mindset is relevant to startup life, to growth leadership, to any domain where there's uncertainty and failure. The people who thrive are those who can take action decisively without being emotionally destroyed if the action doesn't produce the desired outcome.
Adapting to the Exponential: Advice for the AI Era
As organizations navigate rapid AI-driven change, Amol offers concrete advice for people worried about relevance and job security:
Become expert in the tools. Use Claude. Use Claude Code. With each new model release, go back and try things that didn't work before. Many people do this once, find it doesn't work, and move on. But the tools improve so rapidly that what was impossible last month is trivial now. The people who stay current are the ones who continuously re-engage with what's possible.
Find your unfair advantage. Don't try to be good at everything. Some PMs are brilliant at cross-functional coordination and stakeholder alignment. Others are exceptional at craft and design. Others understand financial models deeply. Double down on where you naturally spike. In a world where AI is commoditizing some skills, the skills that matter most are the ones that require human judgment, context understanding, and relationship management. Find yours and go deep.
Become interdisciplinary. PMs who can also design. Engineers who deeply understand business. Growth people who understand sales. These people are increasingly valuable because they can bridge conversations that pure specialists can't. This requires deliberate investment—learning design, understanding financial modeling, studying human psychology—but the payoff is substantial.
Be adaptable. If you're still using playbooks from five years ago, you're in trouble. Amol's advice when joining Anthropic was to accept that 50-70% of what worked in previous jobs is irrelevant now. You need to be willing to completely rethink how you approach your work based on new realities. This is uncomfortable, but it's essential.
Conclusion
Anthropic's growth from $1 billion to $19 billion in 14 months isn't magical. It's the result of exceptional talent, disciplined focus, a revolutionary product, and thoughtful growth strategy executed at scale. Amol Avasare's leadership of that growth reveals principles that apply far beyond AI: the importance of activation, the power of mission-driven culture, the value of leaving money on the table, and the necessity of continuous adaptation.
The story also reveals something more personal about resilience, constraint, and what humans are capable of. Amol's career hasn't been a straight line. He's failed as a founder, suffered a devastating injury, and had to reinvent himself multiple times. Each time, he's come back stronger and more thoughtful about what actually matters.
As AI reshapes industries and roles become increasingly uncertain, his advice is clear: invest in continuous learning, find where you uniquely create value, stay adaptable, and understand that your greatest strengths often emerge from your greatest challenges. The future of work will belong to those who can think strategically, work across disciplines, and maintain their humanity in the face of accelerating change.
Original source: Anthropic’s $1B to $19B growth run: how Claude became the fastest-growing AI product in history