Brain-Computer Interfaces: The Future of Human Enhancement & Healthcare
Key Insights
- Vision Restoration Breakthrough: More than 40 patients have received BCI treatments restoring sight through a 2mm silicon chip implanted under the retina, creating the first coherent form vision ever achieved in a patient's mind's eye.
- Medical Paradigm Shift: Neural engineering approaches are fundamentally changing healthcare by treating the brain—the only organ that truly matters—rather than pursuing incremental pharmaceutical solutions.
- Biohybrid Neural Interfaces: The next frontier involves growing biological connections between engineered neurons and the brain, creating ultra-high bandwidth interfaces without invasive wiring.
- Longevity and Enhancement: BCIs represent a longevity story more than an AI merger narrative, enabling restoration of lost capabilities and eventually offering enhancements that healthy individuals might envy.
- Convergence with AI: Artificial intelligence and neuroscience are unifying through latent space representations, revealing that AI models think similarly to biological brains—a validation that BCIs can truly interface with human cognition.
What Are Brain-Computer Interfaces and Why They Matter
Brain-computer interfaces represent a fundamental reimagining of how medicine approaches human capability and longevity. At its core, a BCI is a technology that creates a communication pathway between the brain and external devices, bypassing traditional biological constraints. Unlike the common misconception that BCIs are primarily about merging humans with machines for enhanced intelligence, they are first and foremost a healthcare and longevity story.
The brain, despite being humanity's most powerful biological computer, is isolated by the skull with only limited connections to the world: twelve pairs of cranial nerves and thirty-one pairs of spinal nerves serve as the "API" through which all sensory input and motor output flow. These pathways function like cables connecting a computer to its peripherals. If you can control the signals transmitted through these neural pathways, then the reality perceived by the brain—everything a person sees, hears, feels, and experiences—is determined entirely by the electrical spikes traveling along these nerves. This perspective fundamentally reframes what's possible in medicine and human enhancement.
Current BCI applications focus primarily on restoring lost functionality to patients with severe conditions. For patients who have gone blind, gone deaf, or become paralyzed, BCIs offer the possibility of restoration—not enhancement of healthy people, but recovery of capabilities that disease or aging has stolen. This distinction is crucial because it changes the risk-benefit calculation entirely. A patient who has been unable to see for a decade and is offered the chance to read letters on an eye chart for the first time in years will accept a surgical procedure that a healthy person would never consider.
The deployment timeline for BCIs follows a logical progression. Initially, these technologies will serve the most disabled patient populations—those with the greatest need and the most to gain from even basic restoration. As devices become more powerful and can access richer, bidirectional neural representations from larger portions of the brain, the risk-benefit equation gradually shifts. Eventually, as aging-related decline becomes universal and the devices improve sufficiently, many people will reach a critical age where restoration therapy makes sense. Only after this point—when devices become significantly more capable and proven safe—might enhancement applications emerge for those seeking capabilities beyond their natural baseline.
The Breakthrough in Vision Restoration: How Prima Works
The most immediate and tangible BCI achievement is Prima, a retinal prosthesis that has demonstrated remarkable success in restoring vision to blind patients. This technology emerged from rigorous first-principles thinking about how the eye processes visual information and where in that cascade of processing the best intervention point exists.
The human retina contains approximately 150 million photoreceptor cells—rods and cones—that detect light. These feed into 100 million bipolar cells, which perform critical computational processing on the visual signal. Finally, 1.5 million retinal ganglion cells transmit the processed signal to the brain via the optic nerve. This three-layer architecture represents an engineered solution that nature has perfected over millions of years: the eye is not simply a camera. Rather, it performs substantial computation, compressing and extracting features from raw light before sending signals to the brain.
Prima is a 2-millimeter by 2-millimeter silicon chip implanted beneath the retina, directly stimulating the bipolar cell layer. The chip functions as an array of microscopic solar panels. Patients wear glasses containing a camera that observes the world and a laser projector that projects images onto the implanted chip. Wherever the laser strikes, the solar panel absorbs the light and directly excites the bipolar cells above it. This approach bypasses the dead photoreceptors—the rods and cones that have been destroyed by diseases like macular degeneration—and reinjects visual signals at the optimal computational layer.
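To make the signal path concrete, here is a minimal Python sketch of the flow described above: a camera frame is downsampled to the chip's pixel grid, each photovoltaic pixel converts the projected light into a stimulation current, and the bipolar cells above a pixel are driven when that current crosses a threshold. The grid size, threshold, and linear light-to-current model are illustrative assumptions, not Prima's actual specifications.

```python
import numpy as np

# Hypothetical parameters -- not actual Prima specifications.
CHIP_PIXELS = (38, 38)        # assumed photovoltaic pixel grid on the 2 mm chip
THRESHOLD_CURRENT = 0.2       # assumed normalized current needed to excite bipolar cells

def camera_to_projection(frame: np.ndarray) -> np.ndarray:
    """Downsample a camera frame to the pattern the glasses' projector draws on the chip."""
    h, w = frame.shape
    ph, pw = CHIP_PIXELS
    # Average-pool the image down to the chip's pixel count.
    return frame[:h - h % ph, :w - w % pw].reshape(ph, h // ph, pw, w // pw).mean(axis=(1, 3))

def photovoltaic_response(projection: np.ndarray) -> np.ndarray:
    """Each chip pixel converts incident light to stimulation current (assumed linear)."""
    return projection / projection.max()

def bipolar_activation(current: np.ndarray) -> np.ndarray:
    """Bipolar cells above a pixel are driven only when current exceeds the threshold."""
    return (current > THRESHOLD_CURRENT).astype(float) * current

# Toy end-to-end pass: a synthetic camera frame becomes a pattern of bipolar-cell drive.
frame = np.random.rand(380, 380)
stimulus = bipolar_activation(photovoltaic_response(camera_to_projection(frame)))
print(stimulus.shape)  # (38, 38): one value per photovoltaic pixel
```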
The clinical trial results published in the New England Journal of Medicine demonstrated unprecedented success. Patients who had been unable to see faces for a decade could suddenly read every letter on an eye chart. This represents the first time in history that coherent form vision has been created in a patient's mind's eye through direct neural stimulation. Previous attempts with electrical stimulation at other retinal layers, such as the work of Second Sight a decade ago, could only generate phosphenes—flashes of light that patients could recognize as separate entities but that the brain could not assemble into coherent images.
Why did Prima succeed where earlier approaches failed? The answer lies in understanding retinal processing layers. When stimulating the optic nerve cells (the 1.5 million ganglion cells), the signal has already been compressed and abstracted by the retina. Each ganglion cell doesn't represent a single pixel of light but rather complex features like edges, motion direction, or color gradients. Attempting to stimulate these cells to generate vision is like trying to write a high-level computer program by directly manipulating machine code. The brain cannot easily decode it.
In contrast, the bipolar cells represent the pre-compression layer, where the signal still maintains a correspondence to the raw image. By stimulating at this layer, the visual information retains the structure necessary for the brain to process it coherently. This breakthrough validates the principle that understanding neural processing architecture—truly grasping the "API" of the brain—enables profound interventions.
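A toy contrast makes the point. In the sketch below (purely illustrative, not a model of real retinal circuits), the "bipolar layer" keeps pixel-for-pixel correspondence with the image, while the "ganglion layer" is reduced to edge-like features, which is why patterns written at the post-compression stage are so much harder for the brain to read back as an image.

```python
import numpy as np

# Illustrative only: contrast the two candidate write points described above.
image = np.zeros((8, 8))
image[:, 4:] = 1.0                               # a simple light/dark boundary

bipolar_layer = image                            # pre-compression: retains spatial structure
ganglion_layer = np.abs(np.diff(image, axis=1))  # post-compression: crude edge responses only

print(bipolar_layer.sum())   # reflects the overall brightness pattern
print(ganglion_layer.sum())  # responds only at the boundary; the raw image is gone
```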
The diseases that Prima addresses are staggering in scale. Age-related macular degeneration alone affects 200 million people globally, with the severe form impacting 1-2 million people. Retinitis pigmentosa, Stargardt's disease, diabetic retinopathy, and other conditions destroying photoreceptors affect millions more. One of Prima's advantages is that the company remains largely agnostic to the specific cause of photoreceptor loss. Whether the rods and cones died from genetic mutation, age-related degeneration, or diabetes, the solution is the same: reinject the visual signal at the bipolar cell layer. This disease-agnostic approach means the technology can potentially address numerous conditions simultaneously.
Looking forward, the company envisions achieving near-normal 20/20 vision within the next decade. This would involve expanding the field of view, adding color vision, and increasing the resolution beyond current capabilities. The path to this goal is clear, though technically challenging. The ultimate vision is to restore sight that is functionally equivalent to normal human vision—not as a miraculous novelty, but as a reliable medical treatment.
Plasticity, Learning, and How the Brain Adapts to BCIs
One of the most profound questions about BCIs concerns neuroplasticity: Can the brain learn to use these interfaces? Must children receive them while their brains are still malleable? Can adult brains, ossified by years of fitting to reality, truly adapt to novel inputs? The answer, supported by decades of neuroscience research, is more nuanced and more hopeful than popular understanding suggests.
The brain does indeed exhibit "critical periods"—windows during early development when specific connections must be established or they become extremely difficult to wire later. The most dramatic example is congenital cataracts: infants born with cloudy lenses who fail to develop clear vision during the first months of life suffer permanent visual impairment even after surgical correction in adulthood. Their brains never learned to make sense of visual input because they never received it during the critical period.
However, this understanding has led to a widespread misconception that adult brains are largely fixed and plastic only during childhood. The biological reality is profoundly different. The adult brain remains remarkably plastic throughout life, far more so than commonly appreciated. The difference is not that adults cannot learn—obviously they can and do—but rather that adult brains exist in stable "attractor states," having settled into configurations that effectively match external reality.
Consider the brain as an energy landscape with hills and valleys. During development, the brain descends into a deep basin in this landscape, fitting itself to the regularities of the world. Once settled, external stimuli must be quite dramatic to push the brain out of this stable state. A healthy adult in a normal environment remains in this basin because it works well and represents an energy minimum. This state is not permanent, however. It was selected for through evolution because the tradeoff between stability and flexibility serves us well in normal circumstances. The plasticity is still present, but it is not obvious because the brain sits in an equilibrium state. Interestingly, one theory suggests that psychedelic drugs work by adding energy to this system, allowing the brain to access other configurations temporarily. When the drug wears off, the brain simply descends back into its original basin. But the plasticity never disappeared.
In the context of BCIs, this principle becomes powerful. If you place an electrode almost anywhere in the cortex, wake the patient during surgery, and show them a flashing light that correlates with the firing of a single neuron, something remarkable happens: within minutes, they learn to control that neuron. The brain, when provided with feedback about what its activity is doing, demonstrates extraordinary plasticity under learning conditions.
Some of the earliest motor BCI experiments illustrated this principle. Rather than decoding what the brain was originally representing, researchers simply fixed the weights arbitrarily. They told the patient: "When this neuron fires more, move the cursor up. When that neuron fires more, move it down." The brain figured it out. The brain learned how to control those neurons to achieve the desired cursor movement, even though the mapping was arbitrary. This reveals something fundamental: the cortex is exquisitely good at extracting meaning from any information provided to it, and learning occurs when two systems—the brain and the external device—can learn from each other bidirectionally.
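Here is a minimal sketch of that fixed-decoder setup, with all parameters invented for illustration: the mapping from neurons to cursor motion is drawn at random and never updated, so any improvement in control has to come from the brain adapting its firing, not from the decoder learning.

```python
import numpy as np

# Illustrative sketch, not the original experiments' code: a fixed, arbitrary
# linear decoder from neural firing rates to 2-D cursor velocity.
rng = np.random.default_rng(0)
n_neurons = 4
decode_weights = rng.normal(size=(2, n_neurons))  # arbitrary and never updated

def cursor_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Map a vector of firing rates (spikes/s) to a 2-D cursor velocity."""
    return decode_weights @ firing_rates

# Simulated session: as the subject modulates these neurons, the cursor moves accordingly.
cursor = np.zeros(2)
for t in range(100):
    rates = rng.poisson(lam=10, size=n_neurons)   # stand-in for recorded firing rates
    cursor += 0.01 * cursor_velocity(rates)       # integrate velocity into position
print(cursor)
```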
For Prima and other visual BCIs, the implications are significant. Blind patients who have spent years in darkness report hallucinations and internally generated percepts as their brains, deprived of input, turn up the "gain" on spontaneous activity. When the Prima implant is first activated, patients need a period of rehabilitation to learn to distinguish real visual signals from these phantom percepts. In initial experiments, researchers would pulse the laser on while playing a tone. After a few pairings, they would play the tone without the laser, and patients would report seeing the flash. The brain had learned the association. With brief rehabilitation, patients learn to discriminate the real implanted signal from their brain's self-generated noise.
The qualia—the subjective experience—of Prima vision is notably normal, at least in its basic character. Patients report black-and-white vision with a limited field of view, but it is genuine sight, not a strange artificial sensation. The deeper question of what ultra-high bandwidth biohybrid interfaces would feel like remains impossible to fully imagine. But nature provides a hint through the rare case of conjoined twins connected at the thalami. These individuals have four brain hemispheres but only one skull, with a single biological cable connecting the two brains. They can share meaningful elements of conscious experience, including visual information, yet remain aware they're experiencing something from another perspective. This extraordinary case suggests that consciousness and perception remain flexible in ways we're only beginning to understand.
The Convergence of Neuroscience and AI: Two Fields Becoming One
One of the most significant developments in both neuroscience and artificial intelligence is their unexpected convergence. A decade ago, the prevailing assumption was that AI researchers would learn from neuroscience, studying how brains solve problems and then implementing those insights in silicon. The reality has largely inverted: AI research is now teaching neuroscience.
This convergence centers on the concept of latent spaces and neural representations. When researchers train AI models—whether image recognition networks or large language models—the internal representations that emerge bear a striking resemblance to the representations found in biological brains. In neuroscience, researchers observed that neurons in various brain regions form geometric objects representing abstract concepts. The visual cortex contains not simple pixel representations but complex feature maps. Deeper in the brain, in regions like inferotemporal cortex, scientists discovered something even more abstract: a map of object space—a multidimensional manifold where each point represents a possible object the brain might identify.
A point in this manifold corresponds to a vase, another to the Eiffel Tower, another to a zebra, another to a human face. As you move through this space, the percept changes continuously. Millions of neurons collectively encode this space, enabling the brain to think about and recognize any object within its learned repertoire. This is exactly analogous to latent space in artificial neural networks. When an AI model learns to recognize images, its hidden layers don't store explicit pixel-to-label mappings. Instead, they develop abstract representations where similar objects cluster near each other in a high-dimensional space.
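The geometric intuition can be shown with a toy example. In the sketch below, objects are points in a small latent space and similarity is measured as cosine similarity between their vectors; the vectors themselves are made up for illustration, whereas a real network or a population of neurons would learn them.

```python
import numpy as np

# Toy illustration of the latent-space idea: objects are points in a shared
# space, and semantic similarity becomes geometric proximity. The vectors are
# invented for this example; a trained model would learn them from data.
embeddings = {
    "zebra":        np.array([0.9, 0.1, 0.7, 0.0]),
    "horse":        np.array([0.8, 0.2, 0.6, 0.1]),
    "eiffel_tower": np.array([0.0, 0.9, 0.1, 0.8]),
    "vase":         np.array([0.1, 0.7, 0.0, 0.6]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Nearby points in the space correspond to perceptually similar objects.
print(cosine_similarity(embeddings["zebra"], embeddings["horse"]))         # high
print(cosine_similarity(embeddings["zebra"], embeddings["eiffel_tower"]))  # low
```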
The implications of this discovery are profound. It suggests that brains and artificial neural networks solve information processing problems in fundamentally similar ways. Both use distributed representations, both organize information hierarchically with increasingly abstract features at deeper layers, and both converge on similar mathematical structures. This is not a superficial similarity but a deep alignment in how information is processed.
For BCIs, this convergence is extraordinarily promising. It means the "API" to the brain—the interface points where external devices can read and write information—is becoming increasingly visible and understandable. The neural activity recorded from a brain, when properly decoded using the same mathematical frameworks that work in AI, can be translated into actionable information for external devices or even other brains.
Many researchers from neuroscience have transitioned to AI work, discovering that they're essentially still doing neuroscience, just on models that are far easier to work with than actual animal brains. You can record from every neuron in an artificial neural network, manipulate individual parameters, and run experiments at the speed of computers rather than at the speed of biological systems. The insights transfer directly back to understanding biological brains. This feedback loop—neuroscience informing AI, AI providing tools to understand neuroscience—represents a genuine scientific revolution.
The criticism sometimes leveled at large language models—that they're "stochastic parrots" or "glorified autocompletes"—reveals a fundamental misunderstanding. These same critics often don't appreciate that they're describing something that's essentially what the brain does as well. The brain is a prediction machine, trained on vast amounts of data, learning statistical patterns and using them to generate behavior and experience. If an AI model is a "stochastic parrot," then so is a human engaged in conversation. This realization doesn't diminish the accomplishment; it illuminates both the capabilities and limitations of neural systems, biological and artificial alike.
Biohybrid Neural Interfaces: The Next Frontier
While Prima represents the current state of the art in vision restoration, the longer-term vision at Science involves biohybrid neural interfaces—fundamentally different from electronic probes or chips. This represents a paradigm shift in how BCIs could be designed and deployed.
The core idea draws inspiration from nature. The human brain has two hemispheres connected by the corpus callosum, which contains approximately 200 million neural fibers. Despite this physical separation, the brain experiences itself as a unified consciousness, integrating information from both sides seamlessly. This raises a profound engineering question: What if you wanted to build an ultra-high bandwidth brain-to-brain connection? How would nature do it? The answer is clear: grow a nerve.
Science's approach involves culturing neurons derived from engineered stem cells directly on a biocompatible implant. These cells are deliberately engineered to be "hypoimmunogenic"—hidden from the immune system—eliminating the need for immunosuppression or patient-specific customization. In laboratory conditions, neurons naturally grow together and form new biological connections, establishing synaptic links and functional circuits. The company loads these engineered neurons into a device, allows them to establish connections in a bioreactor, and then engrafts the entire device onto the brain surface.
This biohybrid approach offers several advantages over direct electrode insertion. First, it provides a more reversible intervention. If the grafted neurons don't function as intended or complications arise, the device can be removed. In contrast, placing electrodes directly into brain tissue causes damage that cannot be easily undone. Second, the biological interface may enable higher bandwidth connections than electronics alone. Living neurons can form thousands of synaptic connections, creating a density of information flow that electronic contacts struggle to match.
The conceptual model Max Hodak uses is inspired by James Cameron's Avatar films. In those movies, the Na'vi aliens have biological "queues"—organic neural connectors—hanging from the backs of their heads. These queues plug directly into other organisms and environmental systems, creating seamless neural integration. For humans, the biohybrid interface would function similarly: a biological conductor grown as a new cranial nerve, with a connector at its end, ready to integrate the human brain with external devices or even other brains.
This is clearly not imminent technology. The pathway from concept to human trials involves five to seven years of research and development, assuming everything proceeds successfully. Many potential pitfalls could derail the approach. But the company has already demonstrated proof of principle in animal models, showing that engineered neurons can integrate with existing brain tissue and form functional connections.
Closely related to this vision is work on optogenetics—the ability to genetically engineer neurons to respond to light. Science has developed breakthrough optogenetic proteins so sensitive that they respond to ordinary office lighting, whereas previous versions required far brighter illumination. By expressing these light-sensitive proteins in specific neuronal populations and targeting them with infrared lasers, you could potentially establish extremely precise, high-bandwidth communication. However, this approach still requires genetic modification of the patient's neurons, raising safety and reversibility concerns that the biohybrid approach avoids.
Expanding the Vision: Vessel and the Broader Healthcare Revolution
Beyond the work on vision and neural interfaces, Science is developing technology called Vessel, focused on perfusion—the artificial circulation of blood and oxygen to organs. This might seem like a departure from neural engineering, but it represents the same first-principles thinking applied to a different critical problem in medicine.
The genesis of Vessel came from a medical case that haunted Max Hodak. A seventeen-year-old in Boston waiting for a lung transplant was kept alive using an ECMO (extracorporeal membrane oxygenation) circuit—essentially an artificial heart-lung machine. When complications made him ineligible for a transplant, doctors faced an ethical dilemma. The young man was alive, playing video games, doing homework, seeing friends. But taking him off the circuit would cause immediate death. The ICU support consumed half a million dollars monthly. The medical literature surrounding this case revealed something striking: dozens of papers debating ECMO as a "bridge to nowhere," with doctors actively discouraging families from pursuing it because it supposedly raised fairness questions about resource allocation.
Hodak's insight was simple: Why not consider it as "destination therapy" rather than "bridge therapy"? If the technology isn't good enough for permanent use, improve it. The response he received was, surprisingly, shouting and dismissal. Something felt deeply wrong about this reaction. The technology wasn't working, but the response suggested the problem wasn't technical but rather institutional and conceptual.
Upon investigation, Hodak discovered that a related technology—normothermic machine perfusion (NMP)—had revolutionized organ transplantation. Twenty years ago, kidney and liver transplants had to happen immediately after organ procurement because organs degrade rapidly when removed from circulation. Now, with perfusion technology, organs can be kept viable for extended periods, allowing transplants to be scheduled at optimal times rather than in the middle of the night. Over 75 percent of liver transplants in the United States now use machine perfusion.
The existing systems for this application, however, are extraordinarily expensive—around $500,000 each—and so large that they can only be transported by private jet. One company in the space has a private jet business larger than its medical device business. This gap between scientific capability and practical deployment screamed for engineering solutions.
Science's goal is to refine perfusion technology to the point where a kidney could be transported as luggage on a commercial flight. Imagine if that seventeen-year-old could have taken an advanced perfusion device home as a backpack, maintaining organ function in the comfort of his residence rather than in an ICU. Or imagine organ transplants being routinely scheduled as appointments rather than emergencies.
The technical challenges are substantial. Just as with neural implants, you need to solve the problem of how skin closes around tubes connecting to blood vessels without creating infection risks. You need to develop systems that are portable, power-efficient, and capable of maintaining biological tissue function for weeks or months. But the problem is real, the unmet need is massive, and the engineering solutions are clearly possible.
Hodak sees the three projects at Science—Prima for vision, biohybrid neural interfaces for expanded cognition and communication, and Vessel for organ perfusion—as manifestations of a single vision: reframing medicine as engineering human biology rather than as chemistry applied to disease. Together, they address the fundamental question: What does it take for a human to have an excellent quality of life in the future? The answer involves not just staying alive but being able to see, hear, move, think, and remain integrated into society.
From Software to Hard Science: The Journey of a Technical Founder
Max Hodak's path to founding Science illuminates both the opportunities and challenges of building advanced technology companies at the intersection of biology and engineering. His background is primarily software—he grew up programming and remains most comfortable with computational thinking. Yet he has dedicated his career to solving problems in neuroscience and biotechnology that require expertise across multiple disciplines.
The origin of this focus traces to childhood fascination with science fiction, particularly The Matrix. The concept of a simulated world so rich and detailed that it would be indistinguishable from reality led to a profound insight: if such a world were possible, the brain—the system interpreting that world—must be the most critical component. Everything else could be considered replaceable or engineerable. This abstraction, that the brain is essentially the interface between a conscious entity and reality, became foundational to his thinking.
In college, Hodak studied biomedical engineering but gravitated to primate neuroscience, specifically brain-computer interface research. He spent most of his undergraduate years in a Duke University lab working on closed-loop decoding—using neural signals to control external devices and observing how brains learn to operate these interfaces. This gave him hands-on experience with the fundamental problems of BCIs years before they became a venture-funded focus area.
After college, he founded Transcriptic, a cloud robotics laboratory for scientists. The idea emerged from frustration with the tedium of experimental biology: in his synthetic biology work, he had to manually press buttons on laboratory instruments every three hours for days to conduct an experiment. In software engineering, this would be immediately automated. AWS and cloud computing were emerging, and Hodak saw the obvious solution: centralize expensive laboratory equipment in a facility with robotic arms, provide APIs for remote researchers, eliminate the need for each scientist to purchase millions of dollars in equipment and perform manual tasks.
Transcriptic raised significant funding and achieved millions in revenue before Hodak stepped down as CEO in 2017. But looking back, he acknowledges that the experience, while valuable, was grueling. The company reached an interesting stage but never achieved the transformation he had envisioned. The period from 2012 to 2016 was what Ben Horowitz calls "the struggle"—the phase where a company must survive operating at the edge of its capabilities, where progress feels glacially slow, and where personal exhaustion is constant.
This experience provided crucial context for what came next. When Sam Altman of Y Combinator introduced him to Elon Musk regarding a new brain-computer interface company in early 2016, Hodak initially thought of MIT contacts who might be interested. Then he realized: he should do this. Within months, Neuralink had formed around a core group of exceptional people, many of whom Hodak knew from Duke, including Tim Hanson, who had conceived of using thin-film polymers as neural probes—a technical direction that eventually became central to Neuralink's approach.
At Neuralink, Hodak experienced what he describes as "the ultimate startup PhD." The company required assembling a multidisciplinary team, managing complex technical challenges, raising capital at an immense scale, and executing on a timeline relevant to national importance. Unlike Transcriptic, where the problem was partially one of market adoption and business model, Neuralink faced deeper technical hurdles: How do you design electrodes that don't damage brain tissue? How do you implant them with surgical precision? How do you ensure biocompatibility and longevity? How do you build a fully implanted system that doesn't require exposed wires running through the scalp?
Neuralink's critical innovation was recognizing that advances in consumer electronics—what Hodak calls "the smartphone dividend"—had created components with the precise characteristics needed for BCIs. Apple, Samsung, and other smartphone manufacturers had poured enormous resources into developing power-efficient, miniaturized electronics specifically to enable mobile computing. These components, originally designed for different purposes, turned out to be exactly what BCIs needed: electronics small enough to fully implant and power-efficient enough that they wouldn't generate dangerous heat.
The other key insight was overcoming the connector problem. Previous neural interface devices required a percutaneous connector—literally, a hole through the skin with wires protruding through it. As long as the skin is open, there's a constant infection risk. Living cells automatically migrate toward and around any foreign object; a percutaneous connector creates a direct pathway for bacteria to climb down the wires into the brain. By achieving a fully closed implant with wireless power, the skin could fully heal, eliminating this critical infection vector.
These aren't revolutionary insights in a theoretical sense. But executing on them at the level of sophistication required for human implantation demanded exquisite engineering, relentless focus, and the resources and team coordination that only an Elon Musk-scale operation could muster.
Reflecting on this journey, Hodak offers advice to others considering similar paths. The first is fundamental: you need a clear sense of what you want to do and extraordinary agency in pursuing it. He knew from college that he wanted to work on BCIs and systematically positioned himself to make that happen. But this clarity only pays off if you follow through with genuine persistence—finding backdoors and unconventional paths when direct routes are blocked.
The second insight is perhaps more subtle: sometimes, working for someone extraordinary is more valuable than attempting to build something entirely yourself. Many entrepreneurial people instinctively want to found their own company, and sometimes that's correct. But Hodak now believes that working at Neuralink under Elon's direction taught him more than he would have learned in years of building other companies. The experience provided not just technical knowledge but deep understanding of how to execute at the highest level.
The advice he would give his 2016 self is this: If a truly exceptional opportunity appears—a chance to work with someone of extraordinary capability on a genuinely important problem—it may be worth deferring personal founding ambitions. The knowledge gained and the networks built can provide a stronger foundation for future endeavors than experience as a young founder. But this only applies if the opportunity is genuinely exceptional. Hodak estimates that such opportunities appear rarely in most people's careers.
The Path to 2035: An Impenetrable Fog of Possibility
When asked to predict the next decade of BCI development, Max Hodak describes an "event horizon" at 2035. Beyond that point, he admits, the future becomes an impenetrable fog. He cannot see clearly beyond it because current technological change is fundamentally altering what's possible.
In the next few years, the trajectory is relatively predictable. Vision restoration will improve, approaching normal acuity. Hearing restoration through cochlear implants is already clinically standard; improvements will expand range and fidelity. Motor control devices will increase bandwidth, providing paralyzed patients with much better cursor control, typing speed, and eventually control of external robotic systems. Balance and proprioception can be restored. These are engineering problems with clear pathways to solution.
But beyond these near-term developments lies genuine uncertainty. The biohybrid neural interfaces, if successful, could enable bandwidth increases of orders of magnitude. If the brain and engineered neurons can develop biological connections with density approaching natural synapses, the information transfer rate could eventually match or exceed what natural neural connections achieve. At that point, what becomes possible is genuinely difficult to imagine.
Consider what happens when bandwidth increases dramatically. Today's motor BCIs operate at roughly 10 bits per second—far below what the motor cortex actually encodes. As bandwidth improves to 1,000 bits per second or beyond, the possibilities shift from "restore basic capability" to "enable entirely new forms of human experience and cognition." A brain-to-brain interface at such bandwidth would be qualitatively different from all previous human communication technologies. Writing, speech, even video—all have bandwidth limitations. Ultra-high bandwidth BCIs would enable direct transmission of percepts, thoughts, and experiences.
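Some back-of-envelope arithmetic, under stated assumptions, shows what those bitrates mean in everyday terms: at roughly five bits per character and five characters per word, 10 bits per second supports on the order of 24 words per minute of typing, while 1,000 bits per second corresponds to thousands of words per minute, faster than anyone can speak or read.

```python
import math

# Back-of-envelope arithmetic for the quoted bitrates (assumptions: ~5 bits to
# select one of 32 symbols, 5 characters per word; real interfaces differ).
BITS_PER_CHAR = math.log2(32)   # ~5 bits per character
CHARS_PER_WORD = 5

def words_per_minute(bits_per_second: float) -> float:
    chars_per_second = bits_per_second / BITS_PER_CHAR
    return chars_per_second / CHARS_PER_WORD * 60

print(words_per_minute(10))     # ~24 wpm at today's ~10 bit/s motor BCIs
print(words_per_minute(1000))   # ~2,400 wpm at the 1,000 bit/s regime described above
```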
The philosophical implications are staggering. Consciousness itself remains poorly understood. We have no objective way to measure it in others; we can only experience our own consciousness. But if BCIs enable direct sharing of conscious experience—one brain receiving the perceptual and cognitive states of another—we would develop new empirical knowledge about consciousness. The conjoined twins connected at the thalami, who can share conscious experience yet remain aware of the distinction between their minds, suggest that consciousness is not a singular, unified phenomenon but something more complex and distributed.
Regarding the risks, Hodak expresses cautious optimism. He assigns a probability of "doom" well below 50 percent. This isn't naive optimism but rather a grounded assessment that while challenges are real, the opportunities for positive outcomes appear to outweigh the downside risks. He notes that he's uncertain whether we'll "cure all diseases" by 2035—he finds that framing simplistic. But he does believe there will be fundamentally new options for addressing human suffering and expanding human capability. These new options will reframe our understanding of what it means to be human.
Intelligence itself is undergoing a transformation. Artificial intelligence is becoming widely accessible to anyone with the agency to implement it. This democratization of computational intelligence, combined with BCIs enabling more direct brain-computer integration, could enable individuals to amplify their own cognitive capabilities in ways previously impossible. A person with access to AI tools and a high-bandwidth BCI interface could potentially access capabilities that would have seemed superhuman just years earlier.
The transition to 2035 and beyond will also be shaped by how society and institutions adapt to these technologies. Medical regulatory bodies are beginning to engage with neural interfaces, but the frameworks are still evolving. The ethical questions—about fair access, enhancement versus treatment, equity, and consent—remain largely unresolved. These questions are as important as the technical ones and may ultimately determine whether the benefits of BCIs are broadly distributed or concentrated among the privileged.
What Hodak is certain of is that we are in a "takeoff era"—a period where something genuinely new is emerging on Earth. For most of human history and prehistory, even the past few centuries, the conditions of human life remained relatively constant. Then came the Industrial Revolution, which transformed everything. Within a few generations, human life became unrecognizable. The period from roughly 1750 to 1850 must have been disorienting for those living through it, watching fundamental aspects of existence—how people work, where they live, what's possible—shift continuously.
Hodak believes the next fifteen years will be similarly transformative, driven by convergent advances in AI, biotechnology, and neural interfaces. Just as someone from 1700 could not have imagined the world of 1850, people today struggle to genuinely envision what becomes possible when BCIs reach high bandwidth, when AI becomes ubiquitous, and when biology becomes increasingly engineerable.
The most exciting aspect, from his perspective, is that this is not speculative or dependent on theoretical breakthroughs. The fundamental science is increasingly settled. BCIs work; they restore function, they interface with brains in ways we can measure and improve. The question is no longer "is this possible?" but rather "how fast can we improve it and how broadly can we deploy it?"
Conclusion
The future of human capability and longevity is being built today through advances in brain-computer interfaces, neural engineering, and biotechnology. Max Hodak's work at Science—restoring vision through Prima, developing biohybrid neural interfaces, and creating portable perfusion systems—represents far more than incremental medical progress. These technologies embody a fundamental shift in how we approach human health and potential. Rather than pursuing chemistry-based interventions for each disease, we're engineering the brain itself and the interfaces connecting consciousness to external reality. The convergence of artificial intelligence and neuroscience is revealing that biological and artificial intelligence operate through similar principles, making the brain increasingly understandable and improvable. As we approach 2035, the trajectory is clear: the technologies being developed now will transform what it means to be human, enabling restoration of lost function for millions and eventually offering capabilities that healthy individuals today cannot imagine. The most important message is not that science fiction futures are coming, but that a genuine transformation in human medicine and capability is already underway.
Original source: How To Build The Future: Max Hodak