Discover how AI pricing is reversing from cheap tokens to premium models. Explore the Jevons Paradox vs Veblen goods and what it means for your business.
The Great AI Pricing Reversal: From Jevons Paradox to Veblen Goods
Key Takeaways
- Token prices dropped 10-20x in 18 months, but demand surge is pushing toward premium pricing models
- Claude Mythos pricing rumors suggest 5-6x cost increase over current flagship models like Claude Opus
- The Jevons Paradox era is ending – companies will stop optimizing for cheap inference and deploy capital for maximum capability
- Balance sheets become competitive moats – access to premium AI models will determine market winners
- The AI capability gap will widen dramatically between well-capitalized companies and those unable to afford cutting-edge models
Understanding the AI Economic Shift
For the past 18 months, the artificial intelligence industry has operated under a single economic principle: the Jevons Paradox. Named after economist William Stanley Jevons, it describes how, as a resource becomes cheaper to use, total consumption rises rather than falls. We've witnessed this dynamic firsthand. Token prices—the cost of processing information through AI models—have fallen 10-20x, and demand has responded explosively.
The revenue numbers tell the story. Anthropic surged past $19 billion in annualized run-rate last month, jumping from just $9 billion at the end of 2025. Meanwhile, OpenAI topped $25 billion in annualized revenue in February 2026, representing a staggering 17% increase in just two months. These growth rates are unprecedented in enterprise software, driven almost entirely by the affordability of AI services and the rush to integrate them into every product and workflow imaginable.
However, an accidental data leak this past weekend has revealed something that could fundamentally reshape this dynamic. Anthropic's secretive Claude Mythos model—accidentally disclosed through a leaked blog post—represents what the company describes as "a step change" in capability. Early reports suggest dramatically higher scores on software coding, academic reasoning, and cybersecurity testing compared to existing flagship models.
But here's where the economic shift becomes critical: Anthropic explicitly stated that Mythos will be "very expensive to serve" and "very expensive for customers." Industry speculation points to inference pricing that could be **5-6x higher than current models**. This represents a fundamental departure from the trajectory of the past two years.
The Death of the Token-Minimization Strategy
To understand the implications, consider the current pricing landscape. Claude Opus 4.6, Anthropic's current flagship model, costs $5 per million input tokens and $25 per million output tokens. OpenAI's GPT-4.5 is significantly cheaper at $2 for input and $8 for output. But leaked documentation suggests Claude Mythos could cost between $15-25 per million input tokens and a staggering $75-150 per million output tokens—potentially 6 times more expensive than what companies are paying today.
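The spread becomes concrete with a quick back-of-envelope calculation. The rates below are the article's figures (the Mythos numbers are rumored ranges, not confirmed pricing), and the request size is a hypothetical illustration:

```python
# Per-million-token rates from the article. Mythos figures are rumors,
# not confirmed pricing.
PRICES = {  # model: (input $/M tokens, output $/M tokens)
    "Claude Opus 4.6": (5.00, 25.00),
    "GPT-4.5": (2.00, 8.00),
    "Claude Mythos (low rumor)": (15.00, 75.00),
    "Claude Mythos (high rumor)": (25.00, 150.00),
}

def request_cost(model, input_tokens, output_tokens):
    """Dollar cost of one request at the listed rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Hypothetical coding-assistant request: 8k tokens of context in, 2k out.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 8_000, 2_000):.4f}")
```

At this (assumed) request shape, the high-rumor Mythos rate works out to roughly 5.6x the Opus 4.6 cost per request, which is where the article's "5-6x" framing lands.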
This pricing structure signals a profound economic transition. For eighteen months, the optimization imperative has been clear: minimize token consumption, reduce model complexity, and maximize efficiency. Developers built lighter prompts. Companies fine-tuned workflows to squeeze maximum value from the cheapest inference available. The entire industry raced toward more efficient AI, not more capable AI.
The rumored Mythos pricing obliterates this strategy. Companies optimizing for cheap inference will suddenly face an impossible choice. Consider a Series A founder building an AI-powered coding assistant. Her current burn rate assumes Claude Opus 4.6 pricing. She's calculated unit economics, projected runway, and structured her pricing model around $25 per million output tokens.
If Mythos launches at $150 per million output tokens—6 times higher—she has three options: raise her product prices (risking customer churn), raise additional capital (diluting equity to buy runway), or watch AI-native competitors build features 10 times faster using Mythos while she remains constrained to older, cheaper models. For a bootstrapped startup or a company with limited fundraising access, this becomes an existential decision.
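To make the founder's math concrete, here is a minimal sketch. Only the $25 and rumored $150 per-million-output-token rates come from the text; the 400-million-token monthly volume is an assumed illustration:

```python
# Illustrative burn-rate impact of a 6x output-token price jump.
# Rates are the article's ($25 current vs. rumored $150 per million
# output tokens); the monthly volume is a hypothetical assumption.

def monthly_inference_cost(output_token_millions, rate_per_million):
    """Monthly output-token spend in dollars."""
    return output_token_millions * rate_per_million

current = monthly_inference_cost(400, 25)    # Opus 4.6 rate
mythos = monthly_inference_cost(400, 150)    # rumored Mythos rate

print(f"Current monthly spend: ${current:,.0f}")   # $10,000
print(f"Mythos monthly spend:  ${mythos:,.0f}")    # $60,000
print(f"Extra burn per month:  ${mythos - current:,.0f}")
```

At that assumed volume, the same workload jumps from $10,000 to $60,000 a month, which is the kind of delta that forces one of the three choices above.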
This dynamic extends far beyond individual startups. GPU and memory shortages are already acute. Industry experts have warned about capacity constraints for months. The next-generation model rumors suggest pricing that moves in the opposite direction of current trends. We're entering an era where the most powerful AI models won't be the cheapest—they'll be the most expensive, creating a new economic dynamic never before seen in software.
Welcome to Veblen Goods Territory
This is where Veblen goods enter the picture. Named after economist Thorstein Veblen, these are luxury products whose demand increases as prices rise, violating the standard law of demand: the higher the price, the more desirable they become. The classic examples are illuminating: front-row concert tickets that cost 10 times more than back-row seats despite objectively worse acoustics. Nike Air Jordan sneakers that retail for $110 but resell for $500+ on the secondary market. Ivy League university tuition, where selectivity and exclusivity ARE the value proposition.
For decades, technology followed the opposite trajectory. Moore's Law meant computing power got cheaper and better every 18 months. Cloud computing made infrastructure a commodity. Open-source software disrupted premium vendors by offering free alternatives. The entire digital revolution was built on dematerialization and commoditization—making powerful tools cheap and accessible.
But AI is reversing this. The most capable AI models might actually become status symbols—proof that your company has the capital, the infrastructure, and the strategic vision to afford cutting-edge intelligence. A Series C company using Claude Mythos signals to investors, employees, and customers that it has resources and confidence. A Series A company stuck on Opus 4.6 signals resource constraints.
More importantly, if Mythos-class models genuinely deliver "a step change" in capability—dramatically better at coding, reasoning, and security—then using them isn't a luxury, it's a necessity for competitive advantage. The company that can afford to deploy Mythos will build features faster, ship more reliably, and iterate more intelligently than competitors using cheaper alternatives. The capability gap becomes a market gap.
The New AI Competitive Landscape
The implications ripple across every industry. In AI-native software startups, the company with capital to access the most powerful model wins. How much is that advantage worth? If a Mythos-enabled team can build 10x faster than competitors on Opus 4.6, the difference might be shipping a complete product in months rather than years. In fast-moving markets, that's the difference between market leadership and irrelevance.
In enterprise software, balance sheets become a new moat. The most profitable companies—or those who can raise capital cheaply—will have the biggest competitive advantage in their industries. They can afford to deploy capital aggressively, both in GPU infrastructure and in dollars, to maximize capability rather than minimize cost. A Fortune 500 company with billions in annual revenue can absorb premium AI pricing. A mid-market competitor cannot.
For AI infrastructure companies, the shift is profound. GPU manufacturers like NVIDIA, AMD, and others face unprecedented demand. Data center operators need to upgrade capacity. Cloud providers must decide whether to pass premium pricing to customers or absorb the cost. The entire infrastructure stack becomes a bottleneck.
The broader economic consequence is stark: if AI-native companies can indeed build 10 times faster with Mythos-class models while competitors remain stuck on cheaper alternatives, valuations will diverge further. The gap between AI leaders and laggards won't close—it will accelerate. Companies unable to respond quickly enough or unable to afford the most sophisticated AI will find themselves progressively disadvantaged.
The End of the Cheap Token Era
This represents the end of what we might call the "token-minimization era"—the period where optimization focused entirely on reducing cost per token. Companies will stop thinking about efficiency and start thinking about capability. The era of "how do I get the best results with the cheapest model" gives way to "what does it cost to access the best model in the world, and what is that worth to me?"
The economic logic is compelling. If accessing a Mythos-class model allows your team to build a $100 million product in 18 months instead of 4 years, and that 30-month advantage is worth $500 million in present value, then paying premium pricing for the best model is a rational investment, not an expense to minimize.
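That logic can be sketched numerically. The $500 million advantage and the 18-month window are the paragraph's own hypotheticals; the $2 million baseline inference spend and the 6x multiplier applied to it are additional assumptions for illustration:

```python
# Sketch of the "premium model as investment" arithmetic. All figures
# are hypotheticals: the advantage value comes from the article's
# scenario, the baseline spend is an assumed illustration.

advantage_value = 500_000_000       # present value of shipping 30 months earlier

current_annual_spend = 2_000_000    # assumed baseline inference spend
premium_annual_spend = current_annual_spend * 6   # rumored 6x pricing
extra_cost_over_18mo = (premium_annual_spend - current_annual_spend) * 1.5

print(f"Extra inference cost: ${extra_cost_over_18mo:,.0f}")  # $15,000,000
print(f"Advantage value:      ${advantage_value:,.0f}")
print(f"Return multiple:      {advantage_value / extra_cost_over_18mo:.0f}x")
```

Under these assumptions the premium spend returns roughly 33x, which is why the article frames it as an investment rather than a cost to minimize; the conclusion is only as good as the assumed advantage value.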
This logic breaks down only for companies without access to capital or profit margins. For bootstrapped startups, for companies in low-margin industries, for organizations with limited technical talent, premium AI pricing creates a widening moat. They remain locked out of the best models, forced to compete with inferior tools against better-capitalized rivals.
What This Means for AI Adoption Strategy
The strategic implications are profound for companies of all sizes. If you're building AI-native products, capital becomes currency. The ability to raise additional funding, the profitability of your core business, or the support of well-capitalized investors determines your access to cutting-edge models. Technical talent remains important, but access to premium AI—whether through capital or strategic partnerships—becomes the binding constraint.
For enterprise companies, the shift means AI budgets become investment categories rather than cost centers. Instead of asking "how much should we spend on AI tools," the question becomes "what's AI worth to our competitive position, and what models do we need to stay ahead?" Premium pricing stops feeling like an obstacle and starts feeling like an opportunity to invest in differentiation.
For open-source enthusiasts and advocates of democratized AI, the trend is sobering. The concentration of capability in expensive, proprietary models creates a two-tier system. Well-capitalized companies access the frontier. Everyone else falls behind. This contradicts the open-source movement's egalitarian vision but aligns with how every transformative technology actually distributes itself—first to the wealthy, eventually to everyone.
The Uncertain Future of AI Economics
Here's the crucial uncertainty: we don't know yet whether the Mythos rumors are accurate. The data leak was accidental. Anthropic could clarify, deny, or confirm. Pricing might be different than rumored. The model's actual capabilities might not justify the price premium. Market dynamics could shift.
But the direction is clear. The age of ever-cheaper AI is ending. GPU scarcity, model complexity, training costs, and infrastructure demands are all moving upward. The next generation of models will require more compute, more energy, and more capital to serve. Basic economics suggests premium pricing becomes inevitable.
When Jevons and Veblen walked into a data center 18 months ago, only one could win. The Jevons Paradox suggested infinite demand for cheap intelligence. Veblen goods suggested elite demand for exclusive capability. The past year and a half proved Jevons right—cheap tokens drove explosive growth. But the next chapter might belong to Veblen. Premium models might drive competitive advantage and valuations for those who can afford them.
Conclusion
The AI industry stands at an inflection point. After 18 months of declining token prices and explosive demand growth following the Jevons Paradox, we're entering a new era where premium AI models will command premium prices—and companies will pay them. The leaked Claude Mythos pricing rumors suggest a 5-6x cost increase, signaling the end of the token-minimization era. **Balance sheets will become competitive moats** as well-capitalized companies gain access to cutting-edge models that dramatically outperform cheaper alternatives. The capability gap will widen between AI leaders and laggards, reshaping competitive dynamics across every industry. For your organization, the strategic question is clear: **does your capital structure, profitability, or fundraising ability position you to access the best AI models when they arrive?** Because in the next phase of AI competition, access to premium intelligence might matter more than the talent to use it.
Original source: Veblen & Jevon Walk Into a Data Center