Most People Are in the Stone Ages of AI: Why Accessibility Matters
Key Insights
- Most people utilize AI for basic tasks only, far below the technology's actual capabilities
- The primary challenge for AI companies is making powerful models accessible and useful to average consumers
- Accessibility drives Net Promoter Score (NPS)—making important things cheap and quick is the real AI revolution
- AI has potential to create deflation in critical sectors like education and healthcare through administrative cost reduction
- The intersection of technology and culture shapes how humanity adopts transformative tools
The Great AI Accessibility Gap: Why Most People Are Still in the Stone Ages
It's genuinely fascinating to observe how the world has accelerated in recent years. The last few years have felt like an entire decade compressed into months—innovation, cultural shifts, and technological breakthroughs are happening at an unprecedented pace. Yet despite all this excitement around artificial intelligence, there's a profound disconnect between what AI can do and what most people actually use it for.
When you look at demonstrations of advanced AI capabilities—researchers achieving PhD-level outputs, sophisticated problem-solving, complex analysis—it's easy to assume the world is already fully embracing this technology. But here's the reality check: most people use AI for very basic tasks. They're using chatbots to draft emails, generate simple summaries, or answer straightforward questions. They're not exploring the deeper potential. They're not treating it as a research partner, a creative collaborator, or a transformative tool for their work.
This isn't a criticism of everyday users—it's a commentary on where we are in the AI adoption curve. We're essentially in the stone ages of how people perceive, understand, and use these powerful systems. The gap between capability and utilization is enormous.
The fundamental issue isn't that people don't want better tools. It's that the power of AI models remains inaccessible and underutilized for the vast majority. Even as we discuss agents and autonomous systems, these technologies still feel primitive and out of reach for most individuals. The promise of AI has been heavily marketed, but the practical accessibility hasn't caught up.
Think about other technological revolutions. When the internet emerged, the real breakthrough wasn't the technology itself—it was making it cheap, accessible, and useful. When smartphones launched, the game-changer wasn't the hardware innovation—it was the simplicity of the interface and the affordability of the device. The pattern repeats throughout tech history: accessibility and affordability drive adoption.
The Missing Piece: How Accessibility Actually Changes the Game
There's a crucial insight buried in how technology actually transforms society. It's not about capability—it's about democratization. OpenAI and other leading AI labs have been grappling with exactly this problem: How do you make the power of these models more easily accessible and genuinely useful?
This is the conversation that matters. This is what will determine whether AI remains a tool for researchers and technical specialists or becomes a transformative force for humanity. The speaker in the a16z interview puts it perfectly: "The number one way you change the NPS of AI is you make important things cheap. Quickly."
Look at the economic data. Since 1970, price trends have diverged dramatically: certain products have become essentially free. Flat-screen televisions that cost thousands of dollars twenty years ago are now available for under fifty dollars. Electronics have experienced radical deflation. But notice what hasn't followed this pattern: healthcare, education, and housing have become significantly more expensive.
This isn't inevitable. This is a choice we're making as a society. And AI is uniquely positioned to reverse this trend in critical sectors.
Consider education. The problem isn't intelligence or learning potential—it's administrative bloat. Over the past decade, the ratio of administrators to students has grown dramatically while the productivity of educators has remained relatively flat. If we restored student-to-administrator ratios to where they were just ten years ago and made professors modestly more productive through AI assistance, we could actually decrease the cost of education year over year. We could achieve deflation instead of inflation.
Healthcare presents an even more striking opportunity. Approximately 45% of healthcare costs are administrative overhead. This includes revenue cycle management, back-office operations, procedural reminders, and compliance documentation. These aren't medical problems requiring breakthrough innovations in biology or pharmacology. These are coordination problems that AI can solve remarkably well. Healthcare companies are already massive consumers of AI models, recognizing this opportunity.
The math is straightforward. If AI can reduce administrative overhead in these two critical sectors by even a meaningful percentage, the impact on society would be transformative. Millions of people could access quality education and healthcare who currently cannot. But this requires a fundamental shift: moving from the narrative of AI scarcity to the narrative of AI abundance.
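To make that "straightforward math" concrete, here's a minimal back-of-the-envelope sketch. The 45% administrative share for healthcare comes from the article; the 30% AI-driven savings rate is a purely hypothetical assumption chosen for illustration.

```python
def total_cost_reduction(admin_share: float, admin_savings: float) -> float:
    """Fraction by which total cost falls when only the administrative
    portion of spending shrinks, all else held constant."""
    return admin_share * admin_savings

# Healthcare: ~45% of costs are administrative overhead (per the article).
# Assume (hypothetically) AI trims that overhead by 30%.
reduction = total_cost_reduction(admin_share=0.45, admin_savings=0.30)
print(f"Total healthcare cost falls by {reduction:.1%}")  # 13.5%
```

Even a modest cut to the administrative slice alone produces double-digit deflation in total cost, which is the core of the argument above.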
Technology and Culture: The Philosophical Framework for AI Adoption
Understanding how AI gets adopted requires stepping back and considering how technology and culture interact. Technology is merely a tool—a very sophisticated one, but still a tool. Culture determines how that tool gets used. Values, narratives, and collective beliefs shape whether a technology becomes liberating or constraining.
Currently, much of the AI conversation is driven by fear. In the United States particularly, there's surprisingly negative sentiment around artificial intelligence. The narratives focus on job displacement, existential risk, and dystopian scenarios. These stories are powerful. They capture attention. But they're not motivating people toward positive action.
There's a more compelling narrative available: a future of abundant resources where everyone benefits from technological progress. This is the "grow the pie" mentality that's driven Silicon Valley innovation for decades. Instead of fighting over finite resources—finite jobs, finite capital, finite opportunities—technology can expand the pie, creating abundance where scarcity previously existed.
This reframing isn't naive optimism. It's backed by historical evidence. Every major technological revolution has brought job displacement followed by job creation on a larger scale. The agricultural revolution displaced farm workers but freed humans for other pursuits. The industrial revolution did the same. The computer revolution displaced clerical workers but created entirely new categories of work that didn't previously exist.
The difference with AI is that the pace of change is dramatically accelerated. In the SimCity metaphor that emerged in the conversation, someone hit the 100X speed button on civilization. Everything is happening faster—culture is changing faster, technology is advancing faster, social movements are accelerating. This creates real challenges in how quickly people can adapt and reorient themselves.
But acceleration doesn't change the fundamental dynamic: technology that makes important things cheaper and more accessible benefits everyone. The wealthy may have always had access to premium healthcare, education, and services. AI has the genuine potential to extend those same benefits to everyone else. Not out of charity, but through economic efficiency and scale.
From Personality Development to Ambient Intelligence: The Evolution Ahead
One of the most profound shifts in technology right now is moving from tools that merely deliver information to systems that develop personality. This is a fundamental departure from everything we've done before.
Think about Web 2.0. The primary innovation was creating delivery vehicles for human-to-human communication. Platforms like Twitter and Digg were essentially infrastructure for people to share thoughts with other people. The technology was the conduit; humans were the creators and consumers.
Now we're in uncharted territory: we're developing personalities for machines. This is what's fundamentally different about modern AI. It's not just about making things faster or processing more information. It's about imbuing systems with personality, perspective, and a unique way of engaging with humans.
This is why certain AI models feel different from each other. Claude, for instance, has a distinctly different personality from other models. It feels more like conversing with a thoughtful person than interacting with a utilitarian tool. This "artisan" quality—this sense that there's something approaching a soul in the interaction—is what sets certain implementations apart. It's the difference between a perfectly engineered tool and a crafted experience.
This sophistication is increasing with each iteration. The complexity of developing AI that thinks differently, responds differently, and approaches problems from different angles is exponentially harder than simply training a model through reinforcement learning. We're essentially asking: How do you encode not just intelligence but personality? How do you make AI that people want to spend time with?
The future of this technology likely doesn't involve chatbots at all. The exciting frontier is ambient AI—systems that seamlessly weave into the fabric of daily life, becoming almost ethereal in their presence. Imagine AI that surfaces relevant information at exactly the right moment, that anticipates your needs, that becomes so integrated into your operating system and environment that you forget you're interacting with a technology.
This is what early attempts like Google Now were pointing toward, what Apple is exploring with iOS integration, and what future operating systems will build on. The interface as we know it might eventually become unnecessary. Instead of opening an application to ask a question, you simply think a thought, and the relevant assistance appears.
Eventually, everyone will use AI. The only question is when and how. The answer to "when" is almost certainly within the next five years. The answer to "how" is what matters most—whether AI becomes an exclusive tool for the wealthy and educated or a ubiquitous resource that elevates everyone.
The Creator's Dilemma: Building What You're Genuinely Passionate About
For anyone building in this space, there's an important principle worth emphasizing: create what genuinely interests you, not what seems like the hottest trend. The speaker mentions observing founders at demo days who seemed to have reverse-engineered business ideas from whatever AI tool had just been released. They started with the technology and worked backward to find a problem.
This rarely works. The ventures that succeed long-term are built by people who are genuinely passionate about solving a particular problem. They might use AI as a tool within their solution, but the core motivation comes from the problem space itself, not the technology flavor of the month.
There's a quote from the Bhagavad Gita that captures this perfectly: you're not entitled to the fruits of your labor, only to the labor itself. Apply this to building: focus on enjoying the process, on being genuinely interested in the problem you're solving, and trust that the outcomes will follow. If you're not having fun, if you're not passionate, if you're just chasing what seems like it will make money or get traction—you probably shouldn't be doing it.
This is admittedly a privileged perspective. You need enough runway to take this approach. But for those in that position, it's worth emphasizing: your genuine interest is the real "prompt" that matters. Not the latest AI capability, not the hottest market trend, but the spark of curiosity and passion that originates in your own brain. That's what translates into something worth building.
There are different archetypes of successful builders. Some are the technically brilliant—the people who can conjure things previously unimaginable, like the creators of ChatGPT. Others are more attuned to culture and psychology, using technology as a canvas to create something new. These "gentle philosophers" of Web 2.0, like Evan Williams or Kevin Rose, understood people and culture deeply and built from that understanding.
Most successful people are artists in some sense—they have a unique perspective, a unique "brushstroke" or method. Just as you might stand close to a Monet painting and appreciate the pixel-level details, then step back and see the entire composition, creators work at different scales with different styles. The key is authenticity—building something that reflects your genuine perspective and interests.
Ownership, Abundance, and the Future of AI Sentiment
There's a radical idea worth considering: What if ordinary people owned equity stakes in AI companies? Right now, the wealth generated by AI and related technologies is concentrated in Silicon Valley, in the hands of venture capitalists and early employees of major companies. This concentration feeds a narrative that tech founders and companies are hoarding resources, that the benefits of technology flow upward while risks flow downward.
But imagine if a billion people had a stake in OpenAI, in Claude, in the future of AI. They wouldn't just be users—they'd be owners. They'd have a financial interest in the success of these technologies. More importantly, they'd have a psychological stake in the future. This shifts sentiment from suspicion to investment.
The power law outcomes driven by the internet have created unprecedented wealth concentration. This acceleration of wealth disparity is real, and it's creating real social friction. But the solution isn't to restrict technology or prevent people from accessing tools like AI. The solution might be to democratize ownership.
Consider the irony: Massachusetts once made it illegal to buy Apple stock because it was deemed too speculative during the IPO. This was intended as "protection" for consumers. But it fundamentally underestimated the average person's intelligence and judgment. Given access to tools and information, people generally make smart decisions about their own lives.
Similarly, New York State is considering making it illegal to give or receive financial or health advice via AI. Again, this is framed as protection. But it's actually an "own goal"—it prevents people without access to expensive lawyers and doctors from getting better advice and guidance. The average person is smarter than regulations give them credit for.
The real moonshot for AI companies should be making education and healthcare demonstrably cheaper in the next five years—not through subsidies or charity, but through genuine efficiency improvements. When that happens, when ordinary people experience AI making their lives concretely better in measurable ways, sentiment will shift. Fear will be replaced by enthusiasm. Skepticism will become belief.
This is the opportunity in front of the industry right now. It's not about scaling model capabilities further. It's about accessibility. It's about making important things cheap. Quickly. It's about building products that ordinary people find genuinely useful and delightful. It's about extending the benefits of technology to everyone, not just the privileged few.
The conversation about AI and the future is ultimately a conversation about values—about what we prioritize and what kind of future we want to build. The technology itself is neutral. The culture we build around it will determine everything.
Conclusion
We're at the most fascinating inflection point in technology's history. AI capabilities are advancing faster than ever, yet most people are still in the stone ages of utilizing these tools. The gap between what's possible and what's being used is the real opportunity.
The path forward requires three simultaneous shifts: making AI dramatically more accessible and affordable, shifting narratives from fear to abundance, and building products that genuinely serve ordinary people. When we focus on making important things cheap—when education and healthcare become demonstrably cheaper and better through AI—the skepticism will fade. The sentiment will turn positive not because of clever marketing, but because people will experience the benefit directly.
The greatest achievement of technology isn't creating more capable tools—it's helping humanity understand itself better, expand our potential, and extend opportunity to everyone. We're just getting started. The real opportunity lies ahead, for those building, investing, and participating in this transformation.
Original source: Signüll: Most People Are in the Stone Ages of AI | The a16z Show