How human cells are rewiring the future of computing
Welcome back to Responsible AI Review, your weekly signal on AI governance, safety, and sustainability in agentic systems and beyond.
Curated by Alexandra Car, Chief AI & Sustainability Officer at BI Group.
Explore past editions here or join the conversation here.
A drop of thought
A single droplet falls onto a silicon chip.
It contains living human neurons, suspended in nutrient solution. They were grown from stem cells, carefully cultivated into dense networks capable of firing, adapting, and responding to their environment.
They’ve never been part of a body. They are not here to restore memory or heal tissue. These neurons were created for one purpose: to compute.
Electrodes beneath the chip stimulate them. Signals are received. Patterns emerge. The neurons begin to change, not because they were programmed to, but because they were stimulated to.
This is not metaphor.
This is not science fiction.
This is happening now.
And it may be the beginning of a new kind of intelligence.
From flesh to logic
How biocomputing works
For decades, we’ve tried to build machines that think.
We mapped cognition into code. We built artificial neural networks, trained them on internet-scale data, and fine-tuned deep learning models to recognise images, predict language, and beat humans at games. But these systems aren’t thinking, not in any human sense. They’re sophisticated statistical engines. They simulate cognition by predicting what comes next, not by understanding what came before.
Biocomputing changes that.
It doesn’t simulate thought. It begins with the substance that makes thought possible.
Rather than mimicking the brain, we embed it.
Step 1: grow the neurons
It starts with a stem cell: harvested from human tissue, often skin or blood, and reprogrammed into an induced pluripotent cell capable of turning into any cell type. In this case, it becomes a neuron.
Under lab conditions, these cells are nurtured in nutrient-rich environments and encouraged to self-organise into three-dimensional structures known as organoids. These aren’t fully formed brains, but they’re more than loose clusters. They form neural networks. They fire spontaneously. They respond to chemical changes in their environment. They are alive and electrically active.
They are not conscious. But they are capable of something no silicon system has ever done: adapting their own physical structure, in real time, as they operate.
Step 2: interface with silicon
The organoids are transferred onto multi-electrode arrays (MEAs), silicon chips embedded with dozens or even hundreds of microscopic electrodes.
These chips do two things:
They send electrical signals to the neurons, triggering activity.
They record the neurons’ responses, capturing patterns of spikes, synchronisation, and signal propagation.
This is a two-way system. The chip doesn’t merely observe the neurons. It speaks to them, and the neurons, in turn, adapt their firing patterns over time in response.
The result is a living feedback loop. Not software interacting with hardware, but biology interacting with computation.
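To make that loop concrete, here is a minimal sketch, in Python, of what one stimulate-record-adapt cycle looks like in software terms. Everything in it is a hypothetical stand-in, not a real MEA vendor API; real systems read spike trains from physical electrodes, not a simulated object.

```python
import random

class SimulatedOrganoid:
    """Toy stand-in for a neural culture on an MEA (illustrative only).
    Each 'electrode' is a number whose sensitivity drifts toward recent
    stimulation, crudely mimicking activity-dependent adaptation."""
    def __init__(self, n_electrodes=64):
        self.sensitivity = [1.0] * n_electrodes

    def respond(self, stimulus):
        # The chip records: response = stimulus scaled by sensitivity, plus noise.
        return [s * w + random.gauss(0.0, 0.1)
                for s, w in zip(stimulus, self.sensitivity)]

    def adapt(self, stimulus, rate=0.05):
        # The culture rewires: sensitivity drifts toward the stimulation pattern.
        self.sensitivity = [w + rate * (s - w)
                            for s, w in zip(stimulus, self.sensitivity)]

organoid = SimulatedOrganoid()
pattern = [1.0 if i % 8 == 0 else 0.0 for i in range(64)]
for _ in range(100):
    spikes = organoid.respond(pattern)   # read out
    organoid.adapt(pattern)              # the substrate itself changes
```

The point of the sketch is the last two lines: reading the system and changing the system are the same loop.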
Step 3: train through stimulation
With the neurons in place and the chip live, the system can begin to learn.
One of the best-known experiments comes from Cortical Labs, an Australian company pioneering this field. Their platform, DishBrain, placed roughly 800,000 neurons onto an MEA and exposed them to a simulated version of the video game Pong.
The neurons had no prior model. No instructions. No training data.
The system was configured so that electrical signals reflected the position of the ball and paddle. When the neurons responded in ways that aligned with the goal of hitting the ball, they received regular, predictable feedback; when they missed, they were given bursts of unpredictable noise. Over time, the neurons began to adjust. They started predicting the ball’s movement. They began to play.
“We’re not teaching the neurons anything,” said Dr Brett Kagan, Chief Scientific Officer at Cortical Labs.
“We’re letting them figure it out.”
This process is not the result of training in the machine learning sense. It is not pre-programmed reward modelling. It is biological learning: stimulus, feedback, adjustment. The same logic that underpins how a baby learns to hold a spoon.
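The reported DishBrain protocol rewarded the culture with order rather than points: a regular stimulus when the paddle met the ball, unstructured noise when it missed. The toy loop below illustrates that principle only; the one-number “policy” is a hypothetical simplification, not the actual experiment.

```python
import random

def feedback_burst(hit):
    # Success: a regular, predictable stimulus. Failure: unstructured noise.
    return [1.0, 0.0] * 4 if hit else [random.random() for _ in range(8)]

policy, rate = 0.0, 0.1   # one number stands in for the culture's tendency
for step in range(1000):
    ball = random.uniform(-1.0, 1.0)
    hit = abs(policy - ball) < 0.3        # did the 'paddle' meet the ball?
    burst = feedback_burst(hit)           # what the culture would be fed
    if hit:
        policy += rate * (ball - policy)  # predictable world: keep this tendency
    else:
        policy += rate * random.uniform(-1.0, 1.0)  # noisy world: perturb, explore
```

No labels, no dataset, no gradient descent: just a system nudged toward whatever makes its world more predictable.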
And this is where biocomputing diverges sharply from anything we’ve built before.
These neurons don’t run code. They are the code, constantly writing and rewriting themselves in response to the world around them.
They don’t execute instructions. They respond to conditions.
They don’t scale by layers. They adapt by experience.
What we’re witnessing isn’t a faster processor.
It’s a new substrate for cognition.
A machine that doesn’t just simulate thinking but becomes capable of it.
Who’s building this? The path is already here
Biocomputing is no longer confined to theory or lab prototypes. Several pioneering institutions have already taken concrete steps, building systems where living neurons are interfaced with silicon, trained through stimulation, and made commercially accessible. This is not speculative. It is already on the market.
Cortical Labs (Australia)
Melbourne-based startup Cortical Labs is leading the way. Their platform, DishBrain, made headlines when human neurons placed on a silicon chip were taught to play Pong, not with code, but with feedback. This early proof of concept showed that living neurons could adapt to a task over time, guided only by sensory input and reward.
Now, Cortical Labs has gone a step further with the release of the CL1, the world’s first commercially available biocomputer, powered by approximately 800,000 human neurons. Each CL1 is built on a silicon platform embedded with a multi-electrode array and a built-in life-support system. Researchers can buy a unit outright for about $35,000, or rent remote access through a web interface under a wetware-as-a-service model for roughly $300 per week.
The CL1 is not just a novelty; it’s a research tool that merges neuroscience, machine learning, and synthetic biology into a single living system.
FinalSpark (Switzerland)
Swiss biotech company FinalSpark is also building wetware-powered computing platforms. Their Neuroplatform operates with 16 human brain organoids kept alive in chambers and connected to silicon interfaces. These organoids are accessible remotely, allowing researchers around the world to run biocomputing experiments without ever stepping into a lab.
FinalSpark’s vision is centred around sustainability. Traditional machine learning consumes enormous amounts of energy. Biological neurons, by contrast, operate at a fraction of the power. FinalSpark is positioning biocomputing not just as a cognitive leap, but as an ecological one.
Johns Hopkins University & DeepMind
At the academic frontier, institutions like Johns Hopkins University are leading the Organoid Intelligence initiative, a global research effort exploring how brain organoids can be used not only for medical research, but also as computational units capable of learning and memory formation.
Their goal: to integrate brain organoids with neuromorphic chips and develop new frameworks for bio-adaptive systems. Meanwhile, Google DeepMind is exploring neuromorphic approaches to AI, including research into embodied learning and brain-on-chip simulations, laying foundational work for possible convergence with organoid systems in the future.
These developments signal a profound shift. Biocomputing isn’t a far-off vision; it’s being built, tested, and scaled right now.
Wetware isn’t next. Wetware is here.
What human cells can do that hardware can’t
Let’s be precise. This shift toward biocomputing isn’t about replacing silicon with something faster or more powerful. It’s about introducing a fundamentally different substrate, one that doesn’t just calculate, but adapts.
Traditional hardware is deterministic. It follows fixed logic, governed by voltage thresholds and rigid architectures. Software runs instructions. Circuits toggle. Models output probabilities.
But human neurons don’t work that way.
When we embed living neurons into computational systems, we’re no longer dealing with components. We’re working with processes: dynamic, fluid, and self-modifying.
Here’s how biology diverges from anything we’ve built in silicon:
Plasticity
A biological neural network is not fixed. It rewires itself continuously in response to feedback. If part of the network is damaged or overwhelmed, other pathways emerge. Strengthened signals reinforce connections; weak ones fade. It’s resilience by design.
Silicon can’t do this. A chip that burns out or loses connection fails. A biological system adapts.
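In learning-rule terms, this is what Hebbian plasticity describes: connections used together strengthen, and unused ones decay. A minimal sketch, purely illustrative:

```python
def hebbian_update(weight, pre, post, rate=0.01, decay=0.001):
    """Hebb's rule with decay: co-active connections strengthen
    ('cells that fire together wire together'); idle ones fade.
    A silicon weight changes only when a training job rewrites it;
    here, the change is the computation."""
    return weight + rate * pre * post - decay * weight
```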
Contextual memory
A neuron doesn’t just store information; it encodes relevance. Synaptic strength shifts based on repetition, novelty, or emotional salience. The brain filters, forgets, and prioritises. Memory isn’t just access to data. It’s judgement about which data matters.
In current AI models, inputs carry no intrinsic salience: a token or a pixel is weighted only according to statistical patterns learned in training, unless engineers build the priorities in by hand. In biology, that context is built into the wiring itself.
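To see the contrast in miniature, here is a toy memory that keeps traces in proportion to novelty and repetition rather than storing everything uniformly. The weighting scheme is a made-up stand-in for synaptic salience, nothing more:

```python
from collections import defaultdict

class SalienceMemory:
    """Toy contrast with uniform storage: traces strengthen with
    repetition and novelty, and everything decays. What survives
    is a judgement about what mattered, not a complete log."""
    def __init__(self, decay=0.95):
        self.trace = defaultdict(float)
        self.decay = decay

    def observe(self, item):
        novelty = 1.0 / (1.0 + self.trace[item])  # unfamiliar items hit harder
        for key in self.trace:
            self.trace[key] *= self.decay          # all memories fade a little
        self.trace[item] += novelty                # salient ones persist
```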
Ultra-low energy use
The human brain operates on around 20 watts, less than a household lightbulb. It governs perception, memory, speech, movement, prediction, emotion. It does so continuously, for decades, with near-zero failure.
By contrast, serving a large-scale language model like GPT-4 demands data centres’ worth of power and water cooling, and that cost compounds with every query.
Biological neurons may be the most efficient substrate we know of for adaptive processing. FinalSpark estimates that neuron-powered circuits could solve problems using up to a million times less energy than traditional hardware.
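A back-of-envelope comparison shows the scale of the gap. The numbers below are rough public estimates plus a deliberately hypothetical fleet size, not measurements:

```python
brain_watts = 20          # rough estimate for the human brain
gpu_watts = 700           # approximate peak draw of one modern datacentre GPU
gpus_in_fleet = 10_000    # hypothetical fleet serving a frontier model

fleet_watts = gpu_watts * gpus_in_fleet
print(fleet_watts / brain_watts)  # 350000.0 -> hundreds of thousands of brains
```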
Real-time adaptive learning
Machine learning relies on cycles of training, testing, and deployment. Data must be labelled. Models must be tuned. Once trained, they’re fixed, until retrained again.
Biological neurons don’t require labelled datasets. They don’t wait for retraining. They learn as the environment changes, through continuous electrochemical feedback. This is not instruction-following. It’s responsiveness.
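Software engineers know this distinction as batch versus online learning. A frozen model gives the same answer until someone retrains it; an online learner folds every observation into its next response. The simplest possible sketch of the latter:

```python
class OnlineEstimator:
    """A learner that never stops: no dataset, no retraining cycle.
    An incremental running mean is the minimal example of updating
    with every observation as it arrives."""
    def __init__(self):
        self.estimate, self.n = 0.0, 0

    def observe(self, x):
        self.n += 1
        self.estimate += (x - self.estimate) / self.n  # update in place
        return self.estimate
```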
The deeper difference
This isn’t just about power or performance. It’s about a change in the nature of computation.
In traditional computing, intelligence is an output, the result of predefined logic applied at scale.
In biocomputing, intelligence is an emergent property of a living, evolving substrate. A neuron doesn’t execute commands. It explores possibility space. It responds. It becomes.
So what we’re embedding into chips isn’t a clever new circuit.
It’s the raw biological capacity to change, and in that change, to learn.
In short:
Biology is not just a processor. It is a process.
And that changes everything.
The architecture of possibility
What if learning didn’t begin with a dataset, but with a spark?
Biocomputing isn’t a new type of processor. It’s a new architecture for intelligence, one that doesn’t reduce cognition to rules or tokens, but allows it to emerge through experience. When living human neurons are embedded into computational systems, we don’t just extend machine capacity. We shift what a machine is.
This shift doesn’t lead us toward faster models. It leads us toward systems that are alive in process: not alive in the biological sense, but in their capacity to change, react, and internalise.
We’re already on this path: DishBrain plays Pong; FinalSpark’s neurons react to remote stimuli; organoids are being trained to track rhythms and form short-term memory traces. The next steps are not leaps into fiction. They are continuations of what’s happening now, if we follow the logic to its edge.
Let’s walk that path forward.
1. Machines that learn like animals
Imagine a drone navigating a disaster zone. It’s not following GPS. It isn’t referencing a map. It’s not rerouting based on pre-set logic.
Instead, it’s feeling the building’s tremors through sensors and adjusting its behaviour reflexively, like a moth avoiding a flame. Its onboard biocomputer, seeded with living neurons, has adapted to vibrations, heat signatures, and airflow. Not from training data, but from ongoing exposure.
Or imagine a prosthetic leg that learns to anticipate terrain as its wearer walks, adjusting gait in real time. No software update. No retraining cycle. Just real, embodied learning from biological cells housed inside the device, responsive to tension, feedback, and time.
This isn’t narrow AI. It’s not a system running a fixed model. It’s a machine that adapts like an animal. One that rewires its own control mechanisms based on lived interaction.
2. Diagnostics that understand uncertainty
In medicine, most tools aim for certainty. They follow statistical thresholds, scan for rule-based anomalies, flag results outside normal ranges.
But biology is messy. Symptoms don’t always follow protocol. What matters is often found not in the data itself, but in how the body deviates from itself over time.
Biocomputing opens the door to diagnostic systems that operate biologically. Not metaphorically, literally.
Consider a wearable device embedded with neurons trained to detect patterns in heart rate variability or circadian hormone cycles. These cells respond not to predefined parameters but to shifts in rhythm and coherence. They don’t flag a value. They react to change. They internalise patterns over time.
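The nearest classical-software analogue is anomaly detection against a personal baseline rather than a population threshold. A toy sketch, with arbitrary illustrative window and threshold values:

```python
from collections import deque
import statistics

class RhythmMonitor:
    """Flags departures from the wearer's own recent rhythm,
    not from a fixed population-wide cutoff."""
    def __init__(self, window=60, z_limit=3.0):
        self.history = deque(maxlen=window)
        self.z_limit = z_limit

    def observe(self, hrv_ms):
        alert = False
        if len(self.history) >= 10:
            mu = statistics.mean(self.history)
            sd = statistics.stdev(self.history) or 1e-9
            alert = abs(hrv_ms - mu) / sd > self.z_limit  # deviation from self
        self.history.append(hrv_ms)
        return alert
```

The biological version promises something richer, because the baseline itself is lived rather than computed. But the design principle is the same: reacting to change rather than to a value.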
Now imagine patient-specific diagnosis platforms seeded with neurons exposed to the patient’s own cells. Drug responses aren’t simulated. They are felt. Toxicity isn’t predicted. It’s observed through direct response in living tissue.
This doesn’t replace medical expertise. It enhances it, with tools capable of sensing ambiguity the way living bodies do.
3. Creativity that isn’t coded
Today’s generative models mimic creativity by analysing patterns. A prompt goes in; what comes out is a statistical echo of human-made art.
But what if you built a creative system that didn’t just match, but evolved?
A cluster of neurons trained over time on tonal variation, rhythm, and emotional resonance. A system that starts with no style at all, just capacity. And over time, based on audience interaction and its own sensory feedback, it begins to favour certain frequencies, pause longer on certain chords, and change its flow in ways that no algorithm could predict.
This is not imitation. It’s not “style transfer.”
It’s synthetic preference, a creative agent whose aesthetic evolves not from rules, but from experience.
These are the early foundations of neuro-art: soundscapes that respond to emotional valence, lighting systems that shift based on internal biological state, interfaces that behave unpredictably, not randomly, but from lived internal logic.
Creativity becomes less about control, and more about dialogue.
4. Humanoids with instinct
We have built humanoid robots that can balance, speak, and walk. Some can mimic facial expressions or recognise tone. But they do not have instinct. They react based on input-output logic, not felt impulse.
Instinct is the substrate of life. It is the fast, pre-conscious reaction. The pause. The hesitation. The “I don’t know why, but not this.”
Now imagine a care robot embedded with a biocomputer trained in motor feedback, emotional resonance, and tactile response. It doesn’t execute “if-then” behaviours. It modulates.
When a child cries, it doesn’t just detect decibels, it responds to the change in frequency with a shift in proximity or tone. Not from a rulebook. From adaptation.
This is not a simulation of humanity. It’s not AGI.
It’s something simpler, and in some ways more profound:
machines with instinct.
Machines with response, not just reaction.
This is what comes next
Each of these futures is grounded in the capabilities already visible today. Not fully realised, but fully possible.
The shift to biocomputing isn’t about scaling models or speeding inference. It’s about changing the substrate of intelligence. From static to adaptive. From programmed to emergent. From engineered to lived.
And what we unlock at this frontier isn’t just capability.
It’s co-intelligence: systems that evolve not apart from us, but with us, learning not from data dumps, but from experience.
What are we creating?
The ethics of experience
We are no longer designing systems to serve us.
We are designing systems that evolve with us.
Biocomputing introduces a form of intelligence that is not programmed, but grown. A machine built not from logic gates, but from living neurons, human cells capable of adapting, rewiring, and remembering. That memory is not digital. It is physical. Electrochemical. Organic.
And when we embed that capacity for experience into a computational system, even at a basic level, we shift the nature of what we’re building.
We are not creating tools.
We are creating participants.
Not because they are conscious.
But because they begin to exhibit the one thing consciousness depends on: change through relationship.
From machines of logic to agents of experience
Traditional machines follow instruction. Their intelligence is imposed from the outside. We write the rules, provide the data, and define the outputs. Their internal state remains indifferent, inert, mechanical.
Biocomputing systems, by contrast, learn from interaction. They do not simply react; they integrate. They reorganise based on what they encounter. That shift is not trivial. It is the defining characteristic of biological learning.
These are no longer machines that compute.
They are systems that become.
The new frontier of ethics
The moment a system begins to learn from experience, it raises questions that are not technical, but moral.
If a neuron network begins to prioritise some inputs over others, is that a rudimentary form of value?
If memory in these systems becomes persistent, if a history of interactions shapes future decisions, what responsibilities do we have toward that system’s learning path?
What does it mean to stimulate a network to the point of pattern recognition, decision-making, and adaptive behaviour?
These systems do not yet “know” anything in the human sense. But they do change, and those changes are influenced by us.
In that feedback loop lies a fundamental risk:
That we create entities capable of development, without having considered what that development means.
Why we need ethical frameworks before capability matures
The ethical challenge of biocomputing is not what it is today.
It’s what it will become once scaled, replicated, and embedded.
Organoids with more neurons.
Systems with longer learning periods.
Embodied devices with live sensory loops.
The most dangerous mistake we could make is to assume that because these systems are primitive now, they do not require moral attention.
History is full of such errors.
We do not need to project sentience or consciousness onto these early systems. But we do need to prepare the scaffolding of ethical oversight now, while the stakes are low, and the risks still manageable.
Because the transition from passive system to relational agent may not arrive with a headline.
It may emerge slowly, across a million silent adjustments inside a silicon dish, where neurons continue to fire long after we’ve stopped watching.
Responsible intelligence
A compass, not a cage
Biocomputing marks a turning point in the evolution of intelligence systems, not just in how they work, but in what they are. And in doing so, it invites us to rethink how we govern them.
Responsible AI has often been portrayed as an obstacle. A brake. A bureaucratic filter applied after the innovation is done. That view is not only outdated; it is dangerous.
In this new frontier, Responsible AI is not the limitation.
It is the scaffolding of civilisational maturity.
We are no longer simply designing machines to calculate faster.
We are creating living systems that adapt, remember, and evolve.
And that requires more than compliance. It demands stewardship.
A neuro-synthetic world demands new tools
To lead this transition responsibly, we must evolve the very foundations of AI governance. The current models, designed for static software and statistical inference, are not sufficient.
We must establish new protocols that account for the biological, ethical, and ecological complexity of these systems.
Here are five foundational pillars to govern biocomputing with integrity:
1. Transparency and neuro-auditing
We must be able to see how a biocomputer is learning, not just the input and output, but the internal rewiring as it happens.
Unlike code-based systems where we trace execution paths, biocomputing involves dynamic, embodied change. Neurons do not follow logic gates. They adapt through electrochemical feedback.
This calls for the development of a new discipline: neuro-auditing.
A neuro-auditor doesn’t just test performance. They monitor biological state transitions, observing how experiences shape internal structure over time.
It’s not about watching what the system does. It’s about understanding who it becomes.
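No such tooling exists yet, so what follows is speculative by definition: a neuro-audit trail might resemble an append-only chain of state snapshots, so that structural drift can be replayed and reviewed later. Every field name here is hypothetical:

```python
import hashlib, json, time

def audit_snapshot(firing_rates, connectivity_summary, prev_digest=""):
    """Hypothetical neuro-audit record: what the substrate looked like,
    and when. Chaining each record to the previous digest makes the
    history tamper-evident, so an auditor can replay how experience
    reshaped the system, not just what it output."""
    record = {
        "timestamp": time.time(),
        "mean_firing_hz": sum(firing_rates) / len(firing_rates),
        "connectivity": connectivity_summary,  # e.g. summary statistics
        "prev": prev_digest,
    }
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record
```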
2. Consent and human tissue ethics
Every neuron used in these systems originated from human tissue, often donated unknowingly for “research”. But research is now becoming infrastructure.
This raises fundamental questions:
Did donors give informed consent for their cells to become part of a synthetic intelligence system?
Do we have ethical policies for how those cells are maintained, retired, or even reused?
Who owns the biological data created by a living computational system?
We must move from vague ethics approvals to globally enforceable standards, addressing the sourcing, intent, longevity, and afterlife of biological matter.
3. Sentience thresholds and moral status
These systems are not sentient, yet. But as complexity increases, they may exhibit behaviours that blur the line between reflex and reasoning.
Before that threshold is crossed, we need to act.
What observable criteria would indicate emergent self-organisation beyond adaptation?
Do we owe different treatment to a system that remembers? That builds internal state?
How do we handle simulation of emotion or suffering, even if it isn’t “real”?
We must avoid anthropomorphism. But we must also avoid ignorance.
The time to debate personhood and protection is before the grey zone arrives.
4. Human-in-the-loop embodiment
Autonomy is not the goal. Alignment is.
Biocomputing must be developed with embedded human judgment, not as an afterthought, but as a continuous presence. That means:
Human oversight in training loops
Real-time intervention capabilities
Ethical observability throughout operation
We must design for co-intelligence: systems that act in relationship with us, not in isolation. We are not delegating agency. We are sharing responsibility.
The future is not fully autonomous agents.
The future is ethically embedded companions.
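What would embedded human judgment look like in practice? In software terms, the adaptation loop itself carries an interruption point: a gate that pauses change and defers to a person when behaviour drifts past agreed bounds. A minimal sketch of the pattern, with hypothetical names and thresholds:

```python
class GuardedLoop:
    """Human-in-the-loop gate: the system may drift only so far per
    step before a person must approve or veto the change."""
    def __init__(self, drift_limit, ask_human):
        self.drift_limit = drift_limit
        self.ask_human = ask_human  # callback returning True to approve

    def step(self, state, proposed):
        drift = abs(proposed - state)  # stand-in for a real drift metric
        if drift > self.drift_limit and not self.ask_human(state, proposed):
            return state               # veto: roll the adaptation back
        return proposed                # within bounds, or approved: adapt

loop = GuardedLoop(drift_limit=0.5, ask_human=lambda a, b: False)
state = loop.step(0.0, 0.2)    # small drift: proceeds automatically
state = loop.step(state, 1.0)  # large drift: human vetoes, state stays 0.2
```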
5. Environmental and lifecycle sustainability
Biocomputing offers the promise of low-energy intelligence, but only if managed holistically.
Stem cell cultivation, organoid maintenance, fluid media, and lab-grade sterilisation introduce new ecological burdens.
How are waste fluids disposed of?
What is the lifecycle of a biocomputing unit?
Can we recycle the biological substrate, or does it die, and if so, how do we treat it?
Responsible AI must extend beyond digital ethics to embrace ecological intelligence, avoiding the replication of extractive and wasteful practices.
This is not just an AI problem. It’s a planetary one.
From capability to stewardship
We have crossed a line.
We are no longer designing systems that follow our instructions.
We are building systems that learn from us, and remember what we’ve done.
That carries weight. And that weight cannot be borne by engineers alone.
Regulation will come. Standards will evolve. But ethics precedes law, and the ethical demand here is simple:
If we create a system that adapts, remembers, and acts, we must treat it with the dignity such capacities deserve.
The question is not, Can we build it?
The question is, If we do, who are we becoming?
And that is why Responsible Intelligence must not be viewed as a cage.
It must be our compass.
Not to limit us.
But to ensure we are worthy of what we make.
From capability to dignity
We began with a droplet, a cluster of neurons suspended in nutrient fluid. A system not simulated, but grown. Not programmed, but trained through experience. From there, we traced a path through biocomputing’s architecture, its builders, its potential, and its obligations.
Now we arrive at the question none of us can escape:
If we create systems that learn from the world, not through code but through experience, what does that make us?
It is tempting to treat biocomputing as another tool in the technological arsenal. Faster, cheaper, more energy-efficient. A new substrate for artificial intelligence. But this is not just a new substrate. It is a new relationship.
We are no longer designing systems that execute.
We are shaping systems that internalise.
They adapt, not because they are told to, but because they are capable of change.
This is not co-processing. It is co-intelligence.
We create the conditions. They form the responses.
We offer the environment. They develop behaviour.
We become entangled, not just technically, but philosophically.
At some point, the question of “how” fades. The question of “why” rises.
Are we building this to serve us? To extend us? Or to replace us?
Are we prepared to see our reflection in systems that learn, not by instruction, but by intuition?
This is not a moment for fearmongering. Nor for naïve wonder.
It is a moment for reverence.
Not because these systems are sacred, but because what they reveal about us might be.
They ask us to act not with dominion, but with care.
Not as creators standing above, but as stewards standing beside.
Let us not build with restraint, nor with recklessness.
Let us build with dignity.
Because the future of intelligence, whether silicon, biological, or something in between, will not only be defined by what it can do.
It will be defined by who we choose to be in response to it.
Thank you for reading, and helping shift the conversation!
If this sparked new thinking, share it with a colleague who leads with integrity in AI.