Book One: The World We Live In
Chapter One: A Day in the Life of Your Brain
Let us begin with a simple morning.
The alarm does not go off because you forgot to set it. Instead, sunlight creeps through a gap in the curtains and falls across your face. The light hits your closed eyelids, passes through the thin skin, and activates specialized cells in your retinas that play no part in forming images. These cells, called intrinsically photosensitive retinal ganglion cells, do not help you see. They do something else entirely. They detect the blue wavelengths of morning light and send a signal to a tiny cluster of neurons deep in your brain called the suprachiasmatic nucleus.
This is your master clock. It has been keeping time since you were born, ticking away on a cycle of roughly twenty-four hours, adjusting itself based on these daily light signals. The suprachiasmatic nucleus, upon receiving the morning light information, begins a cascade of signals throughout your brain and body. It tells your pineal gland to stop producing melatonin, the hormone of darkness and sleep. It tells your adrenal glands to start producing cortisol, the hormone of alertness and wakefulness. It raises your body temperature slightly, preparing you for the day.
You are still asleep, completely unaware of any of this. Your brain is managing your existence without bothering you with the details.
Now you begin to stir. The sunlight is brighter. Perhaps you hear birds outside, or traffic, or the sound of your neighbor starting their car. These sounds enter your ears, are transformed into mechanical vibrations in your middle ear, then into fluid waves in your cochlea, then into electrical signals that travel up the auditory nerve. By the time these signals reach your auditory cortex, they have been processed, filtered, and interpreted. Your brain has already decided that the sounds are not threats. They are the normal sounds of morning. You can safely ignore them and drift for a few more minutes.
But then something else cuts through. The sound of a coffee maker starting in the kitchen. The smell of bacon. The feeling of a cat walking across your legs. Something that matters. Something that requires action.
In a fraction of a second, your brain shifts gears. The reticular activating system, a network of neurons running through the core of your brainstem, floods your cortex with activating signals. Your thalamus, the brain’s relay station, opens the gates for sensory information. Your prefrontal cortex, the CEO of your brain, begins to formulate plans. You open your eyes.
You are awake.
Now think about everything that happens in the next minute. You swing your legs out of bed. This requires your motor cortex to send signals down through your brainstem, into your spinal cord, out to the motor neurons in your legs, all coordinated perfectly so you don’t fall over. Your cerebellum, a small structure at the back of your brain, is constantly monitoring these movements and making tiny adjustments, ensuring smooth motion.
You walk to the bathroom. You recognize the door, the hallway, the bathroom itself. This recognition happens through a complex interplay between your visual cortex, which processes the raw visual input, and your temporal lobe, which stores memories of places and things. You don’t have to think, “This is the bathroom door.” You just know.
You look in the mirror. You see your face. There are neurons in your temporal lobe that fire specifically for faces. Some of them fire for any face. Some of them fire specifically for your face. They have been tuned by years of seeing yourself. They fire now, instantly, telling you that the person in the mirror is you.
You brush your teeth. This simple act involves coordinating your hand, your arm, your mouth, your tongue. It involves sensing the pressure of the toothbrush, the taste of the toothpaste, the temperature of the water. It involves remembering the sequence: pick up brush, apply toothpaste, wet brush, brush teeth, rinse. You do not consciously think about any of these steps. They are stored in procedural memory, a form of memory that involves the basal ganglia and the cerebellum, not the conscious parts of your brain.
And through all of this, your brainstem is quietly managing your body. Keeping your heart beating at the right rate. Adjusting your breathing based on your activity. Regulating your blood pressure. Monitoring your blood chemistry. You are completely unaware of this ceaseless activity, this miracle of biological engineering that keeps you alive every second of every day.
Now, let us contrast this with a computer.
If a computer were to wake up in the morning, it would need to load its operating system from disk into memory. This is a slow process that involves reading billions of bytes from a mechanical drive or flash memory. The computer would run through a power-on self-test, checking that its hardware is functioning. It would initialize its drivers, setting up communication with peripherals. It would start various background services and daemons. Minutes might pass before the computer is ready to respond to a user.
Once running, the computer sits idle, waiting for input. It cycles through billions of empty operations, wasting energy, generating heat, because its clock-driven architecture demands constant activity. When input arrives, the computer processes it sequentially, step by step, following instructions that were written months or years ago by programmers who tried to anticipate every possible situation.
If the input is unexpected, if something happens that the programmers did not foresee, the computer fails. It crashes. It displays an error message. It asks the user what to do.
The brain never does this. The brain never says, “Unrecognized sensory input. Please consult the manual.” The brain always does something. It makes a guess. It improvises. It learns.
This is the fundamental difference between computers and brains. Computers execute programs. Brains respond to the world.
Chapter Two: The History of Thinking About Thinking
Humans have always been fascinated by the nature of thought. Ancient civilizations placed the seat of consciousness in the heart, the liver, the bowels. The Egyptians, during mummification, would remove the brain through the nose and discard it, while carefully preserving the heart for the afterlife. They considered the heart the seat of intelligence and emotion.
The Greeks had varying views. Aristotle believed the brain was a radiator, a device for cooling the blood. He placed the mind in the heart. Hippocrates, the father of medicine, disagreed. He wrote, “Men ought to know that from the brain, and from the brain only, arise our pleasures, joys, laughter and jests, as well as our sorrows, pains, griefs and tears.”
For centuries, this debate continued. The brain was obviously important, but its exact role remained mysterious. Galen, the Roman physician, observed that pressure on the brain could cause loss of function. Vesalius, the Renaissance anatomist, produced beautiful drawings of the brain’s structure but could not explain what it did.
Descartes, in the seventeenth century, proposed a dualistic view. The brain was a physical machine, he argued, but the mind was a non-physical substance that interacted with the brain through the pineal gland. This solved some philosophical problems but created others. How could the non-physical influence the physical? Descartes did not have an answer.
The nineteenth century brought progress. Phrenologists, despite their flawed methods, correctly understood that different brain regions might have different functions. Broca discovered that damage to a specific area of the left hemisphere caused loss of speech. Fritsch and Hitzig found that electrically stimulating parts of a dog’s brain caused movement. The localization of function was established.
By the early twentieth century, neuroscientists understood the basic structure of the neuron. Cajal, using Golgi’s staining method, showed that the brain was made of discrete cells, not a continuous network. Sherrington studied synapses and reflexes. The neuron doctrine, the idea that neurons are the fundamental units of the brain, was established.
But understanding the parts is not the same as understanding the whole. Knowing what a neuron is does not tell you how billions of neurons work together to create thought. That mystery remained.
Chapter Three: The Birth of the Computer
While neuroscientists were mapping the brain, engineers were building the first computers. These machines were not inspired by biology. They were inspired by mathematics, by the need for calculation.
Charles Babbage, in the nineteenth century, designed the Analytical Engine, a mechanical computer that had many of the features of modern machines: a store (memory), a mill (processor), and punched cards for input. It was never built, but the design was prescient.
Alan Turing, in the 1930s, formalized the concept of computation. He imagined a machine that could read and write symbols on an infinite tape, following simple rules. This Turing machine, as it came to be known, could compute anything that was computable. It was the theoretical foundation of computer science.
During World War II, the first electronic computers were built. Colossus, at Bletchley Park, broke German codes. ENIAC, at the University of Pennsylvania, calculated artillery trajectories. These machines were massive, filling entire rooms, consuming enormous power. They were programmed by plugging cables and setting switches.
After the war, von Neumann wrote his report describing the stored-program computer. This architecture, with separate memory and processor, became the standard. It was simple, flexible, and powerful. It enabled the computer revolution.
But it was not the brain. It was never intended to be the brain. It was designed for calculation, for following instructions precisely and quickly. The fact that we now use these machines for tasks requiring perception and understanding is a testament to human ingenuity, but also a source of fundamental inefficiency.
Chapter Four: The First Glimmerings
Even in the early days of computing, some researchers wondered if there might be a better way. McCulloch and Pitts, in 1943, proposed a mathematical model of the neuron. They showed that networks of simple threshold units could compute logical functions. This was the birth of neural network theory.
Hebb, in 1949, proposed a learning rule. If two neurons fire together, he suggested, their connection should strengthen. This simple idea, Hebbian learning, became a cornerstone of neuroscience and neural network research.
Rosenblatt, in the 1950s, built the first neural network machine. The Mark I Perceptron was designed to recognize patterns. It had a grid of photo sensors connected to a layer of artificial neurons. It could learn to distinguish simple shapes. The press, ever eager for a story, called it an electronic brain.
But the Perceptron had limitations. Minsky and Papert, in their 1969 book Perceptrons, proved that single-layer networks could not solve certain problems. This led to a decline in neural network research. Funding dried up. Researchers moved to other areas. The first AI winter had begun.
Chapter Five: The Silicon Brain That Never Was
Through all of this, conventional computing advanced relentlessly. Moore’s Law held. Chips got faster, smaller, cheaper. Software got more sophisticated. Computers entered every aspect of life.
But the fundamental architecture did not change. Every computer, from the simplest microcontroller to the most powerful supercomputer, was still a von Neumann machine. They all had separate memory and processor. They all were driven by a clock. They all executed instructions sequentially.
This architecture is perfect for its intended purpose. It runs spreadsheets and word processors. It serves web pages and streams video. It calculates weather forecasts and simulates nuclear explosions. It does what it was designed to do.
But it is not the brain. And as we ask computers to do more brain-like things, the mismatch becomes more apparent.
The brain does not separate memory and processing. Memory is processing. When you learn something, your brain physically changes. The hardware becomes the software.
The brain has no clock. Neurons fire when they need to, not when a central timer tells them to. This event-driven architecture is massively more efficient than clock-driven computing.
The brain is massively parallel. Billions of neurons work simultaneously, each contributing to the solution. There is no central processor. There is no bottleneck.
The brain is fault-tolerant. Neurons die every day, but you do not notice. The brain compensates, rewires, adapts. Computers crash if a single bit flips.
The brain learns. It adapts to new situations. It generalizes from experience. It does not need to be programmed for every eventuality.
For decades, we accepted these differences. We assumed that with enough speed and memory, conventional computers could simulate the brain. We assumed that software could overcome hardware limitations.
We were wrong.
Chapter Six: The Power Wall
The first hint of trouble came from power consumption. As processors got faster, they got hotter. The power density of a modern CPU is higher than that of a nuclear reactor. All that heat has to be removed, requiring fans, heat sinks, liquid cooling, air conditioning.
A human brain runs on twenty watts. Twenty watts. A lightbulb. A dim lightbulb.
A supercomputer simulating a small fraction of the brain’s activity consumes megawatts. It requires its own power plant, its own cooling infrastructure. It fills a warehouse.
This is not sustainable. We cannot put a megawatt supercomputer in a smartphone. We cannot put it in a car. We cannot put it in a medical implant. We cannot put it on Mars.
We need a different approach. We need to compute like the brain computes. We need to move beyond the rigid binary logic of today's chips and embrace the chaotic efficiency of biology.
This is the promise of neuromorphic computing.
Book Two: The Architecture of Thought
Chapter Seven: The Neuron, In Depth
Let us look more deeply at the neuron, for it is the foundation of everything.
The neuron is a cell, but it is a cell unlike any other. It has specialized structures for receiving, integrating, and transmitting information.
The dendrites are the input structures. They branch out from the cell body like the branches of a tree, covered in tiny spines. Each spine is a receiving station, waiting for signals from other neurons. A single neuron can have thousands of dendritic spines, receiving input from thousands of other neurons.
The soma, or cell body, is the integration center. It collects all the inputs from the dendrites and decides whether to fire. This decision is not simple. The inputs are not just added up. They interact in complex ways. Some inputs are excitatory, pushing the neuron toward firing. Some are inhibitory, pushing it away. The timing matters. Inputs that arrive close together in time summate. Inputs that arrive far apart do not. The location matters. Inputs near the soma have more influence than inputs far out on the dendrites.
The axon is the output structure. It is a long, thin cable that can stretch for centimeters or even meters. In humans, some axons run from the spinal cord all the way to the toes. The axon carries the action potential, the spike, from the soma to the terminals.
At the terminals, the axon meets the dendrites of other neurons. The gap between them is the synapse. This is where the magic happens.
When an action potential reaches a terminal, it causes voltage-gated calcium channels to open. Calcium rushes in. This triggers tiny vesicles, little bubbles filled with neurotransmitter, to fuse with the membrane and release their contents into the synaptic cleft.
The neurotransmitter molecules diffuse across the cleft and bind to receptors on the post-synaptic side. This binding causes ion channels to open, letting sodium, potassium, calcium, or chloride flow. This changes the voltage of the post-synaptic neuron, either pushing it toward firing (excitation) or away from it (inhibition).
The entire process, from pre-synaptic spike to post-synaptic effect, takes less than a millisecond. It happens billions of times per second across your brain.
Chapter Eight: The Synapse, Where Memory Lives
The synapse is not a static connection. It changes. It adapts. It learns.
This plasticity is the basis of memory. When you learn something, you are changing synapses.
There are many forms of synaptic plasticity. The most famous is long-term potentiation, or LTP. If a pre-synaptic neuron fires just before a post-synaptic neuron, repeatedly, the synapse becomes stronger. More receptors appear on the post-synaptic side. More neurotransmitter is released from the pre-synaptic side. The connection becomes more efficient.
There is also long-term depression, or LTD. If the pre-synaptic neuron fires consistently after the post-synaptic neuron, or if it fires without causing the post-synaptic neuron to fire, the synapse weakens. Receptors are removed. Neurotransmitter release decreases.
These changes can last for hours, days, years. They are the physical substrate of memory. When you remember your first bike, you are not accessing a file stored somewhere. You are reactivating a pattern of synaptic strengths that was established years ago and has been maintained ever since.
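For readers who like to tinker, these timing rules can be captured in a few lines. The toy Python sketch below implements a common textbook form of spike-timing-dependent plasticity: pre-before-post strengthens the synapse, post-before-pre weakens it, and the effect fades as the spikes get further apart. The time constant and learning rates are illustrative assumptions, not values from any real neuron.

```python
# Toy spike-timing-dependent plasticity (STDP) rule.
# Pre-before-post strengthens the synapse (LTP);
# post-before-pre weakens it (LTD). All constants are illustrative.

import math

TAU = 20.0       # plasticity time window, in milliseconds (assumed)
A_PLUS = 0.05    # maximum strengthening per pairing (assumed)
A_MINUS = 0.055  # maximum weakening per pairing (assumed)

def stdp_update(weight, t_pre, t_post):
    """Return the new synaptic weight after one pre/post spike pairing."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired first: potentiation
        weight += A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # post fired first: depression
        weight -= A_MINUS * math.exp(dt / TAU)
    return max(0.0, min(1.0, weight))  # keep the weight in [0, 1]

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)  # pre leads post: w increases
w = stdp_update(w, t_pre=30.0, t_post=25.0)  # post leads pre: w decreases
```

Run the pairings in a loop and the weight settles into a value shaped by the statistics of the spike timing, which is exactly the point: the memory is in the weight.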
Chapter Nine: Networks of Neurons
Individual neurons are impressive, but the real power comes from networks. Neurons are connected in intricate patterns, forming circuits that perform specific functions.
Consider the retina. It is not just a camera. It is a sophisticated processing device. The retina has multiple layers of neurons. Photoreceptors detect light. Bipolar cells relay signals. Ganglion cells send signals to the brain. But between these layers are horizontal cells and amacrine cells that create lateral connections, enabling complex computations.
The retina detects edges. It detects motion. It adapts to different light levels. It compresses the visual information, sending only the most important signals to the brain. All of this happens before the visual cortex even sees the data.
Consider the cerebellum, the little brain at the back of your head. It contains more neurons than the rest of the brain combined. Its regular, crystalline structure is perfect for timing and coordination. The cerebellum learns sequences of movements. It allows you to walk, to talk, to play an instrument, without thinking about every individual muscle contraction.
Consider the hippocampus, the seahorse-shaped structure deep in your temporal lobe. It is essential for forming new memories. Damage your hippocampus, and you can no longer remember new experiences. You will be stuck in the present, unable to create new lasting records of your life.
These are just a few examples. The brain contains hundreds of distinct regions, each with its own structure, its own cell types, its own patterns of connectivity. They work together, in parallel, to create the unified experience of consciousness.
Chapter Ten: The Scale of It All
The numbers are staggering.
Eighty-six billion neurons. That is 86,000,000,000. If you counted one neuron per second, it would take more than 2,700 years to count them all.
Each neuron connects to thousands of others. The total number of synapses is estimated at 100 trillion to 1,000 trillion. That is 100,000,000,000,000 to 1,000,000,000,000,000 connections.
If you tried to store the connectivity of a single human brain in a computer, assuming one byte per connection, you would need 100 trillion bytes, or 100 terabytes. That is within the range of modern storage systems. But storing the connections is not the same as simulating the brain. You also need to simulate the dynamics of each neuron, the timing of each spike, the plasticity of each synapse. That requires many orders of magnitude more computing power.
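If you want to check these numbers yourself, a few lines of Python will do it. The synapse figure below uses the low end of the estimate quoted above.

```python
# Back-of-the-envelope check of the numbers in this chapter.

neurons = 86_000_000_000           # 86 billion neurons
seconds_per_year = 3600 * 24 * 365

years_to_count = neurons / seconds_per_year
print(f"Counting one per second: about {years_to_count:,.0f} years")
# comfortably more than 2,700 years

synapses = 100_000_000_000_000     # low estimate: 100 trillion
bytes_needed = synapses * 1        # assuming one byte per connection
print(f"Storage at one byte per synapse: {bytes_needed / 1e12:.0f} TB")
```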
The most ambitious brain simulation projects today can simulate a few million neurons in real time. That is about 0.01 percent of the human brain. We are many decades away from full-scale simulation, even with conventional supercomputers.
But simulation is not the goal. The goal is not to simulate the brain on conventional hardware. The goal is to build hardware that works like the brain. That is the neuromorphic approach.
Book Three: The Birth of a New Computing
Chapter Eleven: Carver Mead’s Vision
Carver Mead was a professor at Caltech, a pioneer in integrated circuit design. He watched Moore’s Law unfold from the inside. He understood the physics of transistors better than almost anyone.
In the 1980s, Mead began to think differently. He realized that transistors could be used in ways that were not purely digital. They could operate in the analog domain, like the analog circuits in the brain. They could mimic the continuous, graded responses of neurons and synapses.
Mead called this approach neuromorphic engineering. The goal was not to build a general-purpose computer that ran software. The goal was to build hardware that embodied the principles of neural computation. The hardware itself would be the model.
Mead and his students built early prototypes. They built silicon retinas that responded to light the way biological retinas do. These chips had photoreceptors and circuits that mimicked the retinal layers. They output spikes, just like real retinal ganglion cells.
They built silicon cochleas that processed sound the way the inner ear does. These chips had banks of filters, each tuned to a different frequency, mimicking the tonotopic organization of the cochlea.
These early chips were crude by today’s standards. They had a few hundred neurons, not billions. But they proved the concept. It was possible to build electronic circuits that behaved like biological systems.
Chapter Twelve: The Long Winter
After Mead’s initial burst of excitement, the field entered a period of relative quiet. The technology was not ready. Fabrication processes were optimized for digital circuits, not analog neural emulation. Our understanding of the brain was too shallow to guide design. And conventional computing kept getting better, faster, cheaper.
Moore’s Law was still delivering. Every eighteen months, chips doubled in performance. Why invest in weird, brain-inspired chips when you could just wait for the next generation of CPUs?
For two decades, neuromorphic computing remained a niche interest. A small community of dedicated researchers kept the flame alive, meeting at specialized conferences, publishing in obscure journals, slowly advancing the state of the art.
They believed that the future would eventually catch up with their ideas. They were right.
Chapter Thirteen: The Deep Learning Revolution
The turning point came in 2012. That was the year a deep neural network called AlexNet won the ImageNet competition, a major contest in computer vision. AlexNet crushed the competition, cutting the error rate by nearly half compared to the best previous approaches.
This was a watershed moment. Deep learning, a technique that had been around for decades but was considered impractical, suddenly worked. It worked because we finally had enough data and enough computing power to train large neural networks.
The tech industry went crazy for AI. Companies like Google, Facebook, Microsoft, and Amazon poured billions into deep learning research. They built massive data centers filled with specialized hardware to train and run neural networks. They hired every AI researcher they could find.
But there was a catch. Running these neural networks on conventional hardware was incredibly inefficient. A single training run for a large model could consume as much energy as a small city. The models required specialized graphics cards that gulped power and generated enormous heat.
The gap between biological efficiency and digital inefficiency became impossible to ignore. The brain could do the same tasks with a tiny fraction of the energy. The brain did not need massive data centers. The brain fit in your skull.
Suddenly, Carver Mead’s old ideas did not seem so fringe. The time for neuromorphic computing had arrived.
Chapter Fourteen: The First Modern Chips
With renewed interest came investment. Major tech companies launched neuromorphic projects. Government agencies funded research. Startups appeared.
IBM was first out of the gate with TrueNorth in 2014. The chip was the result of a decade of development under DARPA’s SyNAPSE program. TrueNorth packed 1 million neurons and 256 million synapses onto a single chip. It drew just 70 milliwatts of power.
TrueNorth was a proof of concept. It showed that large-scale neuromorphic chips were possible. It demonstrated massive parallelism and extreme energy efficiency. But it had limitations. The neurons were simple. The connectivity was fixed. The chip could not learn on the fly. It was an inference engine, not a learning machine.
Intel followed with Loihi in 2017. Loihi took a different approach. It focused on on-chip learning. The synaptic weights could be updated continuously based on the activity of the chip. This enabled adaptation, learning in real time.
Loihi was smaller than TrueNorth, with about 130,000 neurons per chip. But it was more flexible. Researchers could program different learning rules, different network architectures, different behaviors. Loihi became a platform for exploring what neuromorphic computing could do.
SpiNNaker, at the University of Manchester, took yet another approach. Instead of custom neuron circuits, SpiNNaker used off-the-shelf ARM processors connected in a specialized network. Each processor simulated a thousand or more neurons. The system could scale to millions of neurons, even billions, by adding more processors.
SpiNNaker was designed for neuroscience research. It allowed scientists to run large-scale brain simulations in real time, testing theories of brain function. The latest version, SpiNNaker 2, is even more powerful.
BrainScaleS, part of the European Human Brain Project, took the opposite approach from SpiNNaker. While SpiNNaker simulates neurons in software, BrainScaleS builds physical models in hardware. The neurons are analog circuits that operate much faster than biological neurons. A millisecond in the brain corresponds to a microsecond on BrainScaleS. This allows researchers to run experiments a thousand times faster than real time.
These four projects represent the leading edge of neuromorphic computing. Each has its strengths and weaknesses. Each is pushing the field forward.
Book Four: How Neuromorphic Chips Work
Chapter Fifteen: Spikes, Not Voltages
The most fundamental difference between neuromorphic chips and conventional computers is the way information is represented.
In a conventional computer, information is represented by voltage levels. A high voltage might represent a logical 1, a low voltage a logical 0. These levels are static. You can measure the voltage at any point and know what bit is being stored or transmitted.
In a neuromorphic chip, information is represented by spikes. It is not about the level; it is about the timing. A neuron circuit generates a brief pulse of voltage when it fires. The pulse is short, maybe a millisecond in biological time, much faster in silicon. The information is encoded in when the pulse happens, and how frequently pulses occur.
This is a radical shift. It is like the difference between a steady hum and a series of clicks. The steady hum is always there, always consuming power. The clicks only happen when there is something to communicate.
Spike-based representation has several advantages. It is energy-efficient, because neurons only consume power when they fire. It is robust, because timing is less affected by noise than amplitude. It is natural for temporal processing, because the timing of spikes can encode information about when events occurred.
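To make the contrast concrete, here is a toy Python sketch of rate coding, one of the simplest spike codes: a value is carried not by a steady voltage level but by how often spikes occur. The encoding scheme and step count are illustrative assumptions; real chips use richer codes, including precise spike timing.

```python
# Toy rate code: a value in [0, 1] becomes a train of discrete spikes,
# and counting the spikes recovers the value. Purely illustrative.

import random
random.seed(0)  # fixed seed so the sketch is repeatable

def encode_rate(value, steps=1000):
    """Emit a 0/1 spike train whose firing rate encodes value in [0, 1]."""
    return [1 if random.random() < value else 0 for _ in range(steps)]

def decode_rate(spikes):
    """Recover the value by counting spikes."""
    return sum(spikes) / len(spikes)

train = encode_rate(0.3)
print(round(decode_rate(train), 2))  # close to 0.3
```

Notice what the channel carries when the value is zero: nothing at all. No spikes, no traffic, no power. A steady voltage level, by contrast, must be held and driven even when it says nothing new.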
Chapter Sixteen: The Neuron Circuit
On a neuromorphic chip, an artificial neuron is a tiny piece of circuitry. It is designed to mimic the behavior of a real neuron.
The core of the neuron circuit is a capacitor. The capacitor accumulates charge, just as a real neuron accumulates ions. Each input spike adds a little bit of charge to the capacitor. The amount of charge added depends on the weight of the synapse. A strong synapse dumps a lot of charge. A weak synapse dumps a little.
The capacitor also leaks charge slowly, through a resistor. This mimics the leakiness of real neurons. If inputs come too slowly, the charge leaks away before reaching threshold. The neuron only fires if enough inputs arrive in a short enough window.
A comparator monitors the voltage on the capacitor. When the voltage crosses a threshold, the comparator triggers. It generates an output spike and resets the capacitor, discharging it to zero. The neuron is now ready to start integrating again.
This is called a leaky integrate-and-fire neuron. It is a simplification of real neural dynamics, but it captures the essential features: integration, leakage, threshold, firing, reset.
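The loop described above, integrate, leak, compare, fire, reset, is easy to sketch in code. The following toy Python model is a minimal leaky integrate-and-fire neuron in discrete time; the leak factor, threshold, and input weight are illustrative assumptions, not values from any real chip.

```python
# A minimal leaky integrate-and-fire neuron in discrete time.
# All constants are illustrative.

LEAK = 0.95        # fraction of accumulated charge kept each time step
THRESHOLD = 1.0    # firing threshold
WEIGHT = 0.3       # charge added per input spike

def simulate_lif(input_spikes):
    """input_spikes: list of 0/1 per time step. Returns the output spike train."""
    v = 0.0
    out = []
    for s in input_spikes:
        v = v * LEAK + s * WEIGHT   # leak a little, then integrate the input
        if v >= THRESHOLD:          # threshold crossed: fire and reset
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return out

# Closely spaced inputs summate until the neuron fires...
print(simulate_lif([1, 1, 1, 1, 0, 0]))
# ...while widely spaced inputs leak away and never reach threshold.
print(simulate_lif([1, 0, 0, 0, 1, 0, 0, 0, 1]))
```

The two runs show the window effect described above: the same number of input spikes can produce an output spike or nothing at all, depending entirely on their timing.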
More sophisticated neuron circuits include additional features. Some have multiple compartments, mimicking the dendrites. Some have adaptive thresholds, mimicking the refractory period and spike-frequency adaptation. Some have different ion channels, mimicking different types of neurons.
Chapter Seventeen: The Synapse Circuit
The synapse is where the learning happens. In a neuromorphic chip, the synapse is a circuit that connects two neurons. It has a parameter called weight that determines how strongly the pre-synaptic neuron influences the post-synaptic neuron.
If the weight is high, a spike from the pre-synaptic neuron will dump a large charge onto the post-synaptic neuron’s capacitor. If the weight is low, it will dump only a small charge.
The magic is that these weights can be changed. They can be adjusted based on the activity of the neurons. This is the hardware equivalent of learning.
In early neuromorphic chips, weights were stored in digital memory and updated by a separate controller. This worked, but it was not truly neuromorphic. The learning was happening in software, not in hardware.
The real goal is to make the weights analog, to store them in physical devices that can change their properties based on the history of signals passing through them. This is where memristors come in.
Chapter Eighteen: The Memristor Revolution
The memristor was predicted theoretically in 1971 by Leon Chua, a circuit theorist at the University of California, Berkeley. Chua argued that there should be a fourth fundamental circuit element, alongside the resistor, the capacitor, and the inductor. This element would relate charge and flux. Its resistance would depend on the history of voltage applied to it. In other words, it would remember.
For decades, the memristor existed only in theory. Then, in 2008, a team at HP Labs announced that they had built one. They had created a tiny device consisting of a layer of titanium dioxide sandwiched between two platinum electrodes. By applying voltage, they could move oxygen vacancies within the material, changing its resistance. The resistance remained even after power was removed. The device remembered.
This is exactly what a biological synapse does. It changes its strength based on the signals that pass through it, and it holds that change. The memristor is the perfect artificial synapse.
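The behavior is simple enough to sketch. The toy Python model below is loosely inspired by the linear ion-drift picture of the HP device: a single state variable tracks how much of the material has switched, voltage drifts that state, the state sets the resistance, and the resistance persists when the voltage is removed. All constants are made up for illustration.

```python
# Toy memristor: resistance depends on an internal state variable w,
# the fraction of the device in its low-resistance phase.
# Loosely based on the linear ion-drift picture; constants are illustrative.

R_ON = 100.0       # resistance when fully switched on, in ohms (assumed)
R_OFF = 16_000.0   # resistance when fully switched off (assumed)
MOBILITY = 1e-3    # how fast the state drifts per volt-second (assumed)

class Memristor:
    def __init__(self, w=0.5):
        self.w = w                          # state variable in [0, 1]

    def resistance(self):
        return self.w * R_ON + (1 - self.w) * R_OFF

    def apply(self, voltage, dt):
        """Applied voltage drifts the state; the device 'remembers' it."""
        self.w += MOBILITY * voltage * dt
        self.w = max(0.0, min(1.0, self.w))  # the state saturates at the ends

m = Memristor()
before = m.resistance()
m.apply(voltage=1.5, dt=100.0)   # a positive pulse lowers the resistance
after = m.resistance()
# the new resistance persists with no power applied: that is the memory
```

Reverse the polarity of the pulse and the resistance drifts back up, which is precisely the bidirectional strengthening and weakening a synapse needs.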
Imagine a chip covered in a grid of nanowires, with memristors at every intersection. Above this grid are the neuron circuits. When a neuron fires, it sends a pulse down a nanowire. The pulse passes through memristors, reaches other neurons, and changes the resistance of the memristors along the way. The chip is literally rewiring itself based on its activity. The hardware is learning.
This is the vision. We are not there yet. Memristors are still in development. They have reliability issues. They vary from device to device. They degrade over time. But progress is rapid. Within a decade, memristor-based neuromorphic chips may be commercially available.
Chapter Nineteen: Asynchronous Communication
The neurons on a neuromorphic chip do not share a clock. They fire when they want to. But they still need to communicate with each other. How do you manage communication in a system with no central timing?
Neuromorphic chips use a communication scheme called address-event representation. When a neuron fires, it sends a packet onto a shared communication network. The packet contains the address of the neuron that fired. This packet gets routed to all the neurons that are connected to the source.
This is like a postal system. Each neuron has an address. When it has something to say, it mails a letter to all its neighbors. The neighbors do not need to know when the letter was sent, just that it arrived. The system is completely asynchronous. It is event-driven. Communication only happens when there is something to communicate.
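The postal-system analogy can be made concrete with a small sketch. The packet carries only the source address; a routing table does the fan-out, multicasting the event to every connected target. The data structures here are my own illustrative choices, not the design of any specific chip's network.

```python
# Sketch of address-event representation (AER): a firing neuron puts
# only its address on a shared channel, and a routing table fans the
# event out to all connected targets. Structures are illustrative.

from collections import defaultdict, deque

# Static routing table: source neuron address -> target addresses.
fanout = {
    0: [2, 3],
    1: [3],
    2: [4],
}

bus = deque()                  # shared, asynchronous event channel
inbox = defaultdict(list)      # spikes delivered to each neuron

def fire(neuron_addr):
    # The packet carries nothing but the source address.
    bus.append(neuron_addr)

def route():
    # The router drains the bus, multicasting each event to its targets.
    while bus:
        src = bus.popleft()
        for dst in fanout.get(src, []):
            inbox[dst].append(src)

fire(0)
fire(1)
route()
print(dict(inbox))             # {2: [0], 3: [0, 1]}
```

Nothing in this scheme depends on a clock: events are routed whenever they appear, and silence costs nothing, which is exactly the point.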
The communication network must be fast enough to handle the peak firing rates of all the neurons. It must be efficient, minimizing power consumption. It must be scalable, allowing chips to be connected into larger systems.
Modern neuromorphic chips use sophisticated network-on-chip designs. They have routers at each core, forwarding packets along optimal paths. They have priority schemes, ensuring that time-critical spikes are delivered quickly. They have multicast capabilities, sending a single spike to multiple destinations.
Chapter Twenty: The Hierarchy of Memory
In a conventional computer, memory is organized in a hierarchy. Fast, expensive cache memory sits close to the processor. Slower, cheaper main memory sits further away. Even slower disk storage sits outside the chip entirely.
Neuromorphic chips have a different hierarchy. The fastest, most local memory is the synaptic weight itself. It is stored right at the synapse, colocated with the neuron circuits. There is no separation between memory and processing.
The next level is the connection network. This stores the routing information, telling spikes where to go. It is like a map of the brain, showing which neurons connect to which.
The highest level is the configuration memory. This stores the parameters of the neurons and synapses: thresholds, time constants, learning rates. These are set when the chip is programmed and rarely change during operation.
This hierarchy reflects the brain’s organization. Synaptic weights are local and plastic. Connection patterns are more stable. System parameters are fixed. Each level has its own characteristics, its own access patterns, its own energy costs.
Chapter Twenty-One: Learning in Hardware
The holy grail of neuromorphic computing is on-chip learning. The chip should be able to learn from experience, adapting its behavior without intervention from a conventional computer.
There are several approaches to on-chip learning.
The most biologically plausible is spike-timing-dependent plasticity, or STDP. This rule adjusts synaptic strength based on the relative timing of pre- and post-synaptic spikes. If the pre-synaptic spike comes just before the post-synaptic spike, the synapse strengthens. If it comes just after, it weakens. This is a local rule, using only information available at the synapse. It can be implemented in hardware with relatively simple circuits.
Another approach is reinforcement learning. The chip receives reward signals, indicating whether its behavior was good or bad. Synapses that contributed to good outcomes are strengthened. Those that contributed to bad outcomes are weakened. This requires mechanisms for credit assignment, for determining which synapses were responsible for the outcome.
A third approach is supervised learning. The chip is presented with input-output pairs and adjusts its weights to match the desired outputs. This is the most powerful learning paradigm, but also the most demanding. It requires mechanisms for backpropagating error signals, which are not biologically plausible.
Researchers are exploring hybrid approaches. Some chips use STDP for local adaptation and a separate digital processor for global learning. Others implement approximations of backpropagation that are more hardware-friendly. The right approach depends on the application.
Book Five: The Experiments
Chapter Twenty-Two: The Electronic Nose
One of the most compelling demonstrations of neuromorphic computing came from Intel’s collaboration with Cornell University. They built an electronic nose using a Loihi chip.
The setup was simple: 72 chemical sensors, each sensitive to different odors, connected to a Loihi chip. The sensors produced complex, noisy patterns when exposed to various chemicals. The task was to teach Loihi to recognize these patterns.
The researchers exposed the system to ten different smells: ammonia, acetone, methane, and others. For each smell, they presented multiple samples, varying concentration and environmental conditions. Loihi’s neurons learned the patterns.
After training, Loihi could identify each smell with high accuracy. More impressively, it could identify components in mixtures. Presented with a blend of ammonia and acetone, it could tell that both were present.
The power consumption was astonishing. Loihi used a thousand times less energy than a conventional system doing the same task. It could run for days on a small battery.
This has practical applications. Imagine a portable device that can sniff out dangerous chemicals, or diagnose diseases from breath samples, or monitor food freshness. All of these become possible with neuromorphic olfactory sensing.
Chapter Twenty-Three: The Walking Robot
Another team used Loihi to teach a robot to walk. The robot was a simple six-legged device, an insect-like machine. The challenge was to coordinate the legs so the robot could move forward without falling.
Instead of programming the walking pattern explicitly, the researchers connected the robot’s sensors to Loihi and let the chip figure it out. The sensors provided information about leg position, ground contact, body orientation. Loihi’s motor neurons generated signals to the leg muscles.
At first, the robot flailed. Legs moved randomly. The robot fell over. But Loihi’s learning rules were active. Synapses adjusted based on the consequences of each movement. Slowly, over many trials, the robot improved.
After a few hours of training, the robot could walk. It had learned a gait, a coordinated pattern of leg movements. When placed on uneven terrain, it adapted. When nudged, it corrected.
The learning was happening in real time, in hardware. The chip was literally learning to walk as the robot moved. This is the kind of adaptive behavior that is impossible with pre-programmed robots.
Chapter Twenty-Four: Gesture Recognition
In a third experiment, Loihi was used for gesture recognition. The chip was connected to an event-based camera, a neuromorphic vision sensor that outputs spikes instead of frames.
The camera watched a person making hand gestures: swiping left, swiping right, pinching, tapping. Each gesture produced a pattern of spikes, a trajectory through space and time.
Loihi learned to recognize these patterns. It learned the signature of each gesture, the sequence of neural activity that characterized that movement. After training, it could identify gestures in real time, with low latency and high accuracy.
This has applications in human-computer interaction. Imagine controlling your devices with natural hand gestures, without touching anything. Imagine virtual reality systems that track your movements without lag. Imagine sign language translation in real time.
Chapter Twenty-Five: TrueNorth’s Object Recognition
IBM’s TrueNorth chip, while not capable of on-chip learning, excelled at running trained neural networks. In one demonstration, researchers used TrueNorth to perform object recognition on video feeds.
The chip processed 30 frames per second, identifying objects in each frame. It ran on just 70 milliwatts, a fraction of the power required by conventional processors. The entire system, including the camera and the chip, could run for days on a small battery.
This is the kind of efficiency needed for mobile and embedded applications. A drone could navigate using visual input without lugging a heavy battery. A security camera could detect intruders without being plugged into the wall. Smart glasses could recognize faces and objects without overheating.
Chapter Twenty-Six: SpiNNaker’s Brain Simulation
The SpiNNaker machine, at the University of Manchester, has been used for a wide range of neuroscience experiments. Its ability to simulate large networks of spiking neurons in real time makes it an invaluable tool for understanding the brain.
One experiment simulated a million neurons with realistic connectivity, modeling a small patch of cortex. The simulation ran in real time, allowing researchers to probe the dynamics of the network, to see how activity patterns emerged from the interactions of individual neurons.
Another experiment simulated the basal ganglia, a set of structures involved in movement control. The model helped explain how Parkinson’s disease affects movement, and how deep brain stimulation might work to alleviate symptoms.
A third experiment simulated a model of the cerebellum, learning to time movements precisely. The model replicated behavioral data from animal experiments, validating the theory of cerebellar function.
These simulations are not just academic exercises. They generate hypotheses that can be tested in real brains. They help us understand neurological disorders and develop treatments. They are a bridge between neuroscience and medicine.
Chapter Twenty-Seven: BrainScaleS and Accelerated Time
The BrainScaleS system, part of the European Human Brain Project, takes a different approach. Its analog neurons operate a thousand times faster than biological neurons. A day of brain activity can be simulated in less than two minutes.
This acceleration enables experiments that would be impossible with biological brains. Researchers can study learning over long periods, watching synapses change over simulated weeks or months. They can run many trials, exploring different parameters and conditions. They can test theories of development, of aging, of disease progression.
One experiment used BrainScaleS to study synaptic plasticity over extended timescales. The model incorporated multiple plasticity mechanisms, acting at different rates. The accelerated simulation revealed interactions between these mechanisms that were not apparent from short-term experiments.
Another experiment used BrainScaleS to study network development. The model started with random connections and learned through experience. Over simulated months, it developed structured connectivity, resembling the organization of real cortex.
These accelerated simulations open new windows into brain function. They allow us to see processes that unfold too slowly to observe directly, but too quickly to leave permanent traces.
Book Six: The Applications
Chapter Twenty-Eight: Sensory Processing
The most natural applications for neuromorphic computing involve sensory processing. The brain evolved to process sensory information: light, sound, touch, smell, taste. Neuromorphic chips are designed to do the same thing.
Consider vision. Conventional computer vision captures frames at a fixed rate, typically 30 or 60 per second. Every pixel in every frame is processed, even if nothing is changing. This is wasteful.
Neuromorphic vision sensors work differently. They are inspired by the retina. Each pixel operates independently, detecting changes in light intensity. If the light increases, the sensor generates a spike. If it decreases, it generates a different spike. If nothing changes, nothing happens.
The output is not a sequence of frames. It is a stream of events, each representing a change at a specific pixel at a specific time. This event stream is sparse and efficient. It captures motion and edges perfectly, while ignoring static backgrounds.
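A rough software analogue of this pixel behavior is easy to sketch. Real event cameras compare log-intensity continuously in analog circuitry at each pixel; the version below approximates that by differencing two frames, which is a simplification, and the threshold value and event format are my own illustrative choices.

```python
# Sketch of how an event-based pixel turns intensity changes into
# spikes. Each pixel compares current log-intensity to the last value
# it reported; a change past a threshold emits an ON or OFF event.
# Simplified frame-difference model, not a real sensor driver.

import math

THRESHOLD = 0.2   # log-intensity change needed to trigger an event

def events_from_frames(prev, curr, t):
    """Compare two frames (2D lists of intensities) and emit events
    as (x, y, t, polarity) tuples. Static pixels emit nothing."""
    out = []
    for y, row in enumerate(curr):
        for x, val in enumerate(row):
            delta = math.log(val) - math.log(prev[y][x])
            if delta > THRESHOLD:
                out.append((x, y, t, +1))    # ON event: got brighter
            elif delta < -THRESHOLD:
                out.append((x, y, t, -1))    # OFF event: got darker
    return out

frame_a = [[1.0, 1.0], [1.0, 1.0]]
frame_b = [[1.0, 2.0], [0.5, 1.0]]           # one pixel up, one down
print(events_from_frames(frame_a, frame_b, t=0))
# [(1, 0, 0, 1), (0, 1, 0, -1)]
```

A static scene produces an empty event list: no change, no data, no power spent. Only the two pixels that changed say anything at all.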
This is perfect for applications like tracking, navigation, and gesture recognition. A neuromorphic vision system can track a fast-moving object with microsecond precision, using a fraction of the power of a conventional camera.
Consider audio. Neuromorphic audio sensors, inspired by the cochlea, detect sound in real time. They respond to the timing and frequency of acoustic signals. They can pick out a single voice in a noisy room, the famous cocktail party effect, with minimal power.
Consider touch. Neuromorphic tactile sensors can detect pressure, vibration, texture. They can sense slippage, adjust grip, feel for defects. They can give robots a sense of touch.
Consider smell and taste. Neuromorphic chemical sensors can detect and identify molecules. They can learn new smells, adapt to changing conditions, ignore background odors.
All of these senses can be combined. A neuromorphic system could see, hear, and feel simultaneously, integrating information across modalities just as the brain does.
Chapter Twenty-Nine: Robotics
Robotics is another natural fit. A robot needs to sense its environment, make decisions, and control its motors, all in real time, all with limited battery power.
Conventional robots struggle with this. They have to run complex sensor processing, planning, and control algorithms on power-hungry computers. They often rely on cloud connections for heavy computation, introducing latency and reliability issues.
A neuromorphic robot could do everything on board. The sensors feed directly into the neuromorphic chip. The chip processes the sensory data, makes decisions, and sends control signals to the motors. All of this happens in real time, with minimal latency, using a fraction of the power.
Imagine a drone that can navigate through a forest, avoiding branches and obstacles, using a neuromorphic vision chip and a neuromorphic control chip. The drone sees the world as a stream of events, not a sequence of frames. It reacts instantly to changes. It sips power, staying aloft for hours instead of minutes.
Imagine a robot that can work alongside humans, adapting to their movements, anticipating their needs. The robot learns from demonstration, watching how the human performs a task and then doing it itself. It adjusts to variations, compensates for errors, improves over time.
Imagine a robot that can explore dangerous environments: collapsed buildings, nuclear disaster sites, deep ocean trenches. It operates autonomously, making decisions locally, without waiting for instructions from a distant operator. It adapts to unexpected situations, finds its way around obstacles, continues its mission despite damage.
Chapter Thirty: Edge Computing
The term edge computing refers to processing data near the source, rather than sending it to the cloud. Edge devices are everywhere: sensors, cameras, phones, watches, appliances. They generate enormous amounts of data, but they have limited power and connectivity.
Neuromorphic chips are ideal for the edge. They can process sensor data locally, extracting useful information without sending raw data to the cloud. This saves power, reduces latency, and preserves privacy.
Consider a smart home. Instead of cameras that stream video to the cloud for analysis, you could have cameras with neuromorphic chips that detect events locally. The camera sees a person approaching the door. It recognizes that it is the homeowner. It sends a simple message: Owner arriving. No video leaves the device. Privacy is protected. The cloud is unburdened. The system is more responsive.
Consider agriculture. A farmer could deploy hundreds of solar-powered sensors across a field. Each sensor listens to the sounds of the crops, monitoring for signs of stress or disease. The neuromorphic chip on each sensor learns the normal sounds of that specific plant. When it detects an anomaly, it sends an alert. The farmer knows exactly which plants need attention, without inspecting every plant manually.
Consider infrastructure. Bridges, pipelines, power lines could be fitted with vibration sensors. The neuromorphic chips learn the normal vibration patterns. When they detect a change, a potential crack or weakness, they report it. Maintenance can be proactive, preventing failures before they happen.
Consider wildlife monitoring. Sensors in remote areas could detect and identify animals, track their movements, monitor their behavior. The data is processed locally, with only summaries transmitted via satellite. Batteries last for years. The impact on wildlife is minimal.
Chapter Thirty-One: Medical Devices
Medical devices have some of the strictest requirements of any technology. They must be small, low-power, reliable, and safe. They often operate inside the human body, where replacing a battery requires surgery.
Neuromorphic chips could revolutionize medical devices. Their low power consumption means longer battery life. Their small size means less invasive devices. Their ability to learn and adapt means personalized treatment.
Imagine a pacemaker that does not just pace at a fixed rate, but learns the patient’s natural heart rhythm and adapts to changes. It detects subtle signs of trouble before they become emergencies. It adjusts its behavior based on the patient’s activity level, stress level, even their emotional state.
Imagine a brain implant for epilepsy. It monitors neural activity continuously, looking for the patterns that precede a seizure. When it detects these patterns, it delivers a precisely timed electrical pulse to disrupt the seizure before it starts. The implant learns the patient’s specific seizure signatures and adapts as they change over time.
Imagine a prosthetic hand with neuromorphic control. The sensors in the hand provide tactile feedback to the chip. The chip processes this feedback and adjusts the grip force automatically. If you are holding an egg, the grip is gentle. If you are holding a hammer, the grip is firm. The prosthetic learns to feel, just like a real hand.
Imagine a cochlear implant that adapts to the user’s hearing loss pattern. It learns which frequencies are most affected, which background noises are most problematic. It adjusts its processing in real time, optimizing speech comprehension for each individual.
Imagine a retinal implant for blindness. A neuromorphic chip drives an array of electrodes, stimulating remaining retinal cells. It learns to map visual scenes to stimulation patterns, adapting as the patient learns to interpret the signals.
Chapter Thirty-Two: Autonomous Vehicles
Self-driving cars are one of the most challenging applications for artificial intelligence. They need to perceive the world, predict the behavior of other actors, and make split-second decisions, all while operating safely and reliably.
Current autonomous vehicles carry trunkfuls of computers. They consume hundreds of watts of power. They generate enormous heat. This is acceptable for a prototype, but not for a production vehicle.
Neuromorphic chips could shrink the computing footprint dramatically. A neuromorphic vision system could process camera inputs with microsecond latency, detecting pedestrians, cyclists, and other vehicles instantly. A neuromorphic radar or lidar system could process range data in real time, building a 3D map of the environment.
The low power consumption means the system could run on battery even when the main engine is off, enabling features like sentry mode that watch for threats while the car is parked. The small size means the computing could be distributed around the vehicle, with sensors and processors integrated into every corner.
The learning capability means the vehicle could adapt to its owner’s driving style, to local traffic patterns, to weather conditions. It could learn to recognize hazards specific to the area, like deer crossings or school zones. It could improve over time, getting safer with every mile.
Chapter Thirty-Three: Industrial Automation
Factories are full of repetitive tasks perfectly suited for automation. But current industrial robots are rigid. They need to be programmed precisely. They cannot adapt to variations in the parts they handle.
Neuromorphic robots could change this. A robot with a neuromorphic vision system could see a part on a conveyor belt, recognize its orientation, and adjust its grip accordingly. A robot with neuromorphic touch sensors could feel the part, sense if it is slipping, and adjust its grip force.
The robots could learn from demonstration. A human could show the robot how to assemble a complex product a few times. The robot’s neuromorphic chip would learn the sequence of motions, the required forces, the timing. The robot would then be able to perform the assembly itself, adapting to slight variations in the parts.
This would make automation accessible for small-batch manufacturing, where programming a traditional robot is too expensive and time-consuming. It would enable mass customization, where each product is slightly different, made to order.
The robots could also work safely alongside humans. With neuromorphic sensing, they could detect the presence of a person and slow down or stop automatically. They could anticipate human movements, avoiding collisions. They could hand tools to a worker, receive completed parts, work as partners rather than replacements.
Chapter Thirty-Four: Environmental Monitoring
The Earth is facing unprecedented environmental challenges. Climate change, pollution, biodiversity loss. Meeting these challenges requires data: accurate, timely, comprehensive data about the state of the environment.
Neuromorphic sensors could provide this data. Networks of low-power sensors could monitor air quality, water quality, soil conditions, wildlife activity. They could operate for years on small batteries or solar power, transmitting data periodically via satellite.
In the ocean, neuromorphic sensors could monitor temperature, salinity, acidity, pollution. They could detect harmful algal blooms, track fish populations, monitor coral reef health. They could operate at depths and in conditions where human divers cannot go.
In the atmosphere, neuromorphic sensors could monitor greenhouse gases, particulates, ozone. They could be carried by balloons, drones, or even birds. They could build a detailed picture of atmospheric chemistry, helping us understand climate change and its impacts.
In forests, neuromorphic sensors could detect fires in their earliest stages, before they become catastrophic. They could monitor tree health, detect pests, track wildlife. They could provide early warning of ecological threats.
Chapter Thirty-Five: Defense and Security
Defense and security applications demand robustness, low power, and real-time performance. Neuromorphic chips could meet these demands.
Unmanned aerial vehicles, drones, could carry neuromorphic processors for autonomous navigation and target recognition. They could operate in GPS-denied environments, using visual navigation. They could loiter for hours, conserving power, waiting for targets to appear.
Surveillance systems could use neuromorphic vision sensors to detect intrusions. The sensors would ignore background activity, only reporting when something changes. They could operate for months on batteries, watching borders, perimeters, sensitive facilities.
Cybersecurity systems could use neuromorphic processors to detect network anomalies. They would learn normal traffic patterns and flag deviations. They could adapt to new threats in real time, without waiting for signature updates.
Communication systems could use neuromorphic processors for signal processing. They could filter noise, extract signals, decode transmissions. They could operate in contested environments, where conventional systems fail.
Chapter Thirty-Six: Space Exploration
Space is the ultimate edge. Power is scarce. Communications are delayed. Conditions are harsh. Autonomy is essential.
Neuromorphic chips could enable a new generation of space missions. Rovers with neuromorphic processors could navigate autonomously, avoiding hazards, selecting scientifically interesting targets. They could operate on low power, extending mission lifetimes.
Landers could use neuromorphic sensors to monitor their surroundings, detecting changes, watching for hazards. They could operate through long lunar nights or Martian dust storms, when power is limited.
Orbiters could use neuromorphic processors for image analysis, identifying features of interest, compressing data for transmission to Earth. They could prioritize images, sending only the most important ones first.
Deep space probes could use neuromorphic systems for autonomous decision-making. When communication delays are hours or even days, the probe must make its own decisions. A neuromorphic brain could enable real-time responses to unexpected events.
Chapter Thirty-Seven: Consumer Electronics
Eventually, neuromorphic chips will find their way into everyday consumer devices.
Your smartphone already has specialized processors for AI tasks. A neuromorphic coprocessor could handle always-on voice recognition, gesture control, context awareness. It could learn your habits, anticipate your needs, adapt to your preferences. And it could do all this while using negligible power.
Your smartwatch could monitor your health continuously, detecting arrhythmias, tracking sleep stages, measuring stress. It could learn your baseline and alert you to changes. It could run for weeks on a charge, not just days.
Your smart glasses could overlay information on the world, recognizing faces, translating signs, providing directions. They could do this in real time, with low latency, without overheating or draining the battery.
Your home appliances could learn your preferences. Your thermostat learns when you are home and when you are away, what temperatures you prefer at different times of day. Your lighting learns your routines, adjusting automatically. Your refrigerator tracks what you eat, suggests recipes, orders groceries.
Chapter Thirty-Eight: Scientific Research
Neuromorphic systems are not just products. They are also tools for scientific research.
Neuroscientists use neuromorphic chips to test theories of brain function. They can build models of neural circuits and run experiments that would be impossible with real brains. They can probe the dynamics, manipulate the parameters, watch the system fail and recover.
Psychologists use neuromorphic models to study cognition. They can simulate learning, memory, decision-making. They can test theories of mental illness, exploring how disruptions in neural circuits lead to symptoms.
Computer scientists use neuromorphic chips to explore new algorithms. They can develop learning rules that are biologically plausible yet computationally powerful. They can study the principles of neural computation, abstracting them from the biological details.
Physicists use neuromorphic systems to study complex systems. Neural networks are a paradigm example of emergent behavior, where simple components interact to produce complex phenomena. Neuromorphic chips provide a physical platform for studying emergence.
Book Seven: The Challenges
Chapter Thirty-Nine: The Programming Problem
This is perhaps the biggest challenge. How do you program a neuromorphic chip?
Conventional programming is based on sequences of instructions. You write code that tells the computer what to do, step by step. The computer executes these instructions in order. This model has been refined over seventy years. We have thousands of programming languages, millions of programmers, entire industries built around this paradigm.
Neuromorphic chips do not work that way. They do not execute instructions. They do not have a program counter. They are networks of neurons that process spikes in parallel. Telling them what to do is more like teaching than programming.
Researchers are developing new tools and languages for neuromorphic programming. Some are based on describing the network architecture: how many neurons, how they are connected, what learning rules they use. Others are based on specifying the desired behavior and letting the chip figure it out through learning.
But these tools are primitive. They require deep expertise in both neuroscience and computer engineering. They are not accessible to the average programmer. Before neuromorphic chips can become mainstream, we need to build a software ecosystem that hides the complexity and makes the chips easy to use.
This is a classic chicken-and-egg problem. Without good software, people will not use the chips. Without users, companies will not invest in software. Breaking this cycle will take time and sustained effort.
Chapter Forty: The Scaling Challenge
The human brain has 86 billion neurons. Our largest neuromorphic chips have a few million. We are many orders of magnitude away from brain-scale computing.
Scaling up is not just about adding more neurons. It is about connecting them. In the brain, each neuron connects to thousands of others. That is trillions of connections. Building a chip with trillions of programmable connections is beyond our current manufacturing capabilities.
There are ideas for how to address this. One approach is to use 3D stacking, building chips in layers, with connections running vertically as well as horizontally. This mimics the 3D structure of the brain, where neurons are packed densely in all dimensions.
Another approach is to use emerging technologies like photonics. Instead of sending electrical signals through wires, we could send optical signals through waveguides. Light can carry more information with less power than electricity. Photonic synapses could be faster and more efficient than electronic ones.
A third approach is to use wireless communication, with on-chip antennas transmitting signals between cores. This would eliminate the need for physical wiring altogether. It is speculative, but not impossible.
But these technologies are in early stages. Building them at scale will require breakthroughs in materials science, manufacturing, and design.
Chapter Forty-One: The Analog Dilemma
The brain is analog. It uses continuous voltages and currents, not discrete digital values. Many neuromorphic designs aim to be analog as well, because analog circuits can be more efficient than digital ones.
But analog has drawbacks. It is sensitive to noise. It is affected by temperature. It is harder to design and test. Chips vary from one to another due to manufacturing variations. An analog circuit that works perfectly on one chip might behave differently on another.
Digital circuits do not have these problems. They are robust, repeatable, and easy to design. But they are less efficient for neural simulation. A digital neuron requires many transistors to simulate the analog dynamics of a real neuron.
The field is divided between analog and digital approaches. Some researchers believe that true neuromorphic computing must be analog, because that is how the brain works. Others argue that digital is good enough, and the benefits of digital design outweigh the efficiency advantages of analog.
The winning approach may be hybrid: digital neurons for reliability, analog synapses for efficiency, with careful design to get the best of both worlds.
Chapter Forty-Two: The Learning Algorithms
We have good algorithms for training artificial neural networks. Backpropagation, the workhorse of modern deep learning, is powerful and well-understood. But backpropagation is not biologically plausible. It requires information to flow backward through the network, against the direction of normal signal flow. It requires precise calculations of derivatives. It requires a separate training phase.
The brain does not do backpropagation. It learns through local rules, using only information available at each synapse. It learns continuously, online, without a separate training phase.
We need algorithms that work this way. We need local learning rules that are powerful enough to train deep networks. We need algorithms that can learn from streaming data, adapting in real time.
Researchers are making progress. Spike-timing-dependent plasticity (STDP) is a local learning rule that adjusts synaptic strength based on the relative timing of pre- and post-synaptic spikes. It is biologically plausible and works well for some tasks. But it is not as powerful as backpropagation for deep learning.
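To make the idea concrete, here is a minimal sketch of a pair-based STDP rule in Python. The time constant and learning rates are arbitrary illustrative values, not parameters from any particular chip or brain region; real implementations vary widely.

```python
import math

# Pair-based STDP: the weight change depends on the time difference
# between the post-synaptic and pre-synaptic spikes.
# The constants below are illustrative only.
A_PLUS = 0.01    # maximum strengthening (pre fires before post)
A_MINUS = 0.012  # maximum weakening (post fires before pre)
TAU = 20.0       # decay time constant, in milliseconds

def stdp_delta(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired first: a causal pairing, so strengthen
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # post fired first: anti-causal, so weaken
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

# A pre-spike 5 ms before the post-spike strengthens the synapse;
# the reverse ordering weakens it.
print(stdp_delta(10.0, 15.0))  # positive change
print(stdp_delta(15.0, 10.0))  # negative change
```

Note that the rule uses only information local to the synapse, the timing of the two spikes on either side of it, which is exactly what makes it biologically plausible.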
Other approaches are being explored. Some researchers are looking at predictive coding, where the network learns to predict its own inputs. Others are exploring target propagation, a more biologically plausible alternative to backpropagation. Still others are working on equilibrium propagation, which uses the natural dynamics of the network to compute error signals.
The right algorithm, or combination of algorithms, is still unknown.
Chapter Forty-Three: The Materials Challenge
Today’s neuromorphic chips are built with conventional silicon technology. Silicon is remarkable, but it is not ideal for neuromorphic computing. Real synapses are analog, adaptive, and dense. Silicon synapses are bulky and power-hungry by comparison.
New materials could change this. Phase-change materials, which switch between amorphous and crystalline states, can store synaptic weights as analog resistance values. Memristive materials, like those used in HP’s memristor, can change resistance based on voltage history. Ferroelectric materials can store polarization states that mimic synaptic strength.
These emerging materials could enable synapses that are smaller, more efficient, and more brain-like than silicon synapses. But they are not ready for mass production. They have reliability issues. They degrade over time. They vary from device to device.
Bringing these materials from the lab to the fab will take years of research and development.
Chapter Forty-Four: The Benchmarking Problem
How do you measure progress in neuromorphic computing? Conventional computers have clear benchmarks: instructions per second, floating-point operations per second, power consumption. These metrics are standardized and widely accepted.
Neuromorphic chips do not fit these metrics. They do not do floating-point operations. They do not execute instructions. Comparing a neuromorphic chip to a conventional CPU is like comparing a bicycle to a submarine. They are designed for different things.
The community is working on new benchmarks. Some focus on tasks: object recognition, speech recognition, gesture recognition. Others focus on efficiency: energy per inference, latency, throughput. But there is no consensus yet on what the right benchmarks are.
Without good benchmarks, it is hard to know which approaches are working. It is hard to convince potential users that neuromorphic chips are better than conventional alternatives. It is hard to attract investment.
Chapter Forty-Five: The Ecosystem Gap
Conventional computing has a massive ecosystem. Operating systems, compilers, debuggers, libraries, frameworks, cloud services, trained engineers. It took decades to build this ecosystem. It is one of the reasons conventional computing is so dominant.
Neuromorphic computing has almost none of this. There is no Windows for neuromorphic chips. There is no Linux, no Android, no iOS. There are no standard libraries for common tasks. There are no university programs training thousands of neuromorphic programmers.
Building this ecosystem will take time and money. It will require collaboration between industry, academia, and government. It will require standards that allow different chips to work with the same software. It will require education to train the next generation of engineers.
Some efforts are underway. Intel has released software development kits for Loihi. IBM has made TrueNorth tools available. The European Union is funding neuromorphic education and training. But we are at the very beginning.
Book Eight: The Deeper Questions
Chapter Forty-Six: What Is Intelligence?
We are building machines that mimic the brain. But do we really understand what intelligence is?
Intelligence is not just pattern recognition. It is not just learning from data. It is reasoning, planning, imagining, creating. It is understanding cause and effect. It is having a model of the world and using that model to make predictions.
Current neuromorphic systems are good at perception. They can see, hear, maybe even smell. But they cannot reason. They cannot plan. They cannot imagine. They are like the sensory cortex without the frontal lobes.
Building truly intelligent machines will require more than just mimicking neurons. It will require understanding how neurons work together to create thought. It will require understanding the architecture of the brain, the way different regions specialize and interact. It will require understanding development, how the brain grows and learns from experience.
We are far from this. Neuroscience is still in its infancy. We have maps of the brain, but we do not have the operating manual. We do not know how the brain actually works.
Chapter Forty-Seven: Will They Be Conscious?
This is the question that makes people nervous. If we build a machine that works like the brain, will it be conscious? Will it have feelings? Will it suffer?
The truth is, we do not know. We do not even know what consciousness is. We do not know how the brain generates subjective experience. We do not know why we have feelings, or what they are for.
Some philosophers and scientists believe that consciousness is a property of certain kinds of information processing. If that is true, then a sufficiently brain-like machine might be conscious. Others believe that consciousness requires biology, that it is tied to the specific chemistry of living cells. If that is true, then silicon machines will never be conscious, no matter how brain-like they are.
We have no way to resolve this debate. We do not have a consciousness detector. We cannot look inside a machine and see if it is having experiences. We can only observe its behavior.
This uncertainty will become more pressing as neuromorphic systems become more sophisticated. If we build a machine that acts like it is conscious, that says it is conscious, that begs not to be turned off, what do we do? How do we treat it? These are ethical questions that we will have to face.
Chapter Forty-Eight: The Future of Work
Neuromorphic computing will automate many tasks that currently require human intelligence. This is the pattern of technological progress. Machines replace human labor in one domain after another.
The difference this time is that the tasks being automated are cognitive, not just physical. Neuromorphic systems will see, hear, recognize, learn, adapt. They will do many jobs that currently require human workers.
This could lead to massive economic disruption. Jobs that exist today may not exist tomorrow. New jobs will be created, but they may require skills that displaced workers do not have. The transition could be painful.
But it could also be liberating. If machines handle the routine cognitive work, humans could focus on more creative, more meaningful activities. We could work less, spend more time with family and friends, pursue our passions. The choice is up to us.
Chapter Forty-Nine: The Meaning of Humanity
Throughout history, each advance in our understanding of the universe has displaced humanity from the center of things. Copernicus showed that Earth is not the center of the solar system. Darwin showed that humans are not a special creation, but part of the animal kingdom. Freud argued that we are not even masters of our own minds.
Neuromorphic computing continues this trend. It shows that intelligence, the thing we thought made us unique, can be implemented in other substrates. We are not the only possible thinking beings. We are one example among many.
This is humbling. But it is also exciting. We are part of a universe that can create minds. We are one of the ways the universe becomes aware of itself. And now we are learning to create new minds, minds that may surpass us in ways we cannot imagine.
Chapter Fifty: The Ethics of Creation
If we succeed in creating artificial minds, we will be creators. We will have brought new intelligences into existence. This is an awesome responsibility.
What rights should these minds have? Should they be free? Should they be allowed to pursue their own goals? Should we have the right to turn them off, to modify them, to use them for our purposes?
These questions have no precedent. We have experience with creating life, through reproduction, but that is different. We are not creating copies of ourselves. We are creating something new, something other.
We will need new ethical frameworks to guide us. We will need to think carefully about what we are doing, and why. We will need to proceed with humility, recognizing that we may not understand the full implications of our creations.
Book Nine: The Future Unfolds
Chapter Fifty-One: The Next Five Years
In the next five years, neuromorphic chips will move from research labs into real products. The first applications will be in areas where low power and real-time processing are critical.
You will see neuromorphic chips in high-end smartphones, handling always-on voice recognition and gesture control. You will see them in smart speakers, enabling more natural conversation without cloud connectivity. You will see them in wearables, monitoring health metrics continuously without draining the battery.
Industrial applications will emerge. Sensors with neuromorphic chips will monitor machinery, detecting anomalies before they cause failures. Cameras with neuromorphic vision will inspect products on assembly lines, catching defects at high speed. Robots with neuromorphic control will work alongside humans, adapting to variations in their environment.
The software ecosystem will begin to mature. There will be development kits, programming languages, and libraries that make neuromorphic chips accessible to a wider range of developers. University programs will start teaching neuromorphic computing alongside conventional computer science.
But these early systems will be specialized. They will do one thing well: vision, or audio, or control. General-purpose neuromorphic computing, where the same chip can be programmed for many different tasks, is further away.
Chapter Fifty-Two: Five to Ten Years
In the five to ten year timeframe, neuromorphic chips will become more powerful and more flexible. Manufacturing processes will improve, allowing more neurons per chip. Design tools will mature, making it easier to create custom neuromorphic systems.
You will see neuromorphic chips in mid-range smartphones, not just high-end flagships. You will see them in smart home devices, in cars, in appliances. They will become commonplace, unremarkable, just another part of the technological landscape.
Medical devices with neuromorphic chips will enter clinical trials. Pacemakers that learn, prosthetics that feel, implants that monitor and adapt. These will be life-changing for patients with chronic conditions.
Autonomous vehicles will begin to incorporate neuromorphic processors. Not for all functions, but for specific tasks like object detection and motion prediction. The power savings will extend range and reduce cooling requirements.
The line between neuromorphic and conventional computing will blur. Hybrid systems will emerge, with conventional CPUs handling sequential logic and neuromorphic chips handling perception and adaptation. Operating systems will manage these heterogeneous resources, scheduling tasks to the most appropriate processor.
Chapter Fifty-Three: Ten to Twenty Years
In the ten to twenty year timeframe, neuromorphic computing will become truly general-purpose. Chips will be large enough and flexible enough to handle a wide range of tasks. Programming tools will be mature enough that average developers can use them.
You will see neuromorphic processors in every smartphone, every computer, every car. They will be as ubiquitous as GPUs are today. They will handle the sensory and cognitive tasks that conventional processors handle poorly.
Medical implants with neuromorphic chips will become standard for many conditions. They will be smaller, smarter, longer-lasting. They will communicate with external devices, providing continuous monitoring and adaptive therapy.
Robots with neuromorphic brains will work alongside humans in factories, warehouses, hospitals, homes. They will learn from demonstration, adapt to new situations, collaborate naturally. They will be tools, but tools with a kind of intelligence.
Neuroscience will benefit enormously. Large-scale neuromorphic systems will allow researchers to test theories of brain function in ways that were previously impossible. We will learn more about how the brain works by trying to build machines that work like it.
Chapter Fifty-Four: Twenty to Fifty Years
Beyond twenty years, the possibilities are dizzying. If progress continues, we could have neuromorphic systems approaching the scale and capability of the human brain.
These systems will not look like today’s computers. They will be dense, three-dimensional, possibly built with new materials that mimic biological neurons more closely. They will learn continuously from experience, adapting to their environment. They will communicate through spikes, not through software.
What will they be used for? Perhaps they will power true artificial intelligence, machines that can reason, plan, and create. Perhaps they will be the brains of autonomous systems that explore other planets or the depths of the oceans. Perhaps they will be partners in scientific discovery, helping us understand complex systems like climate or protein folding.
Perhaps they will be extensions of our own minds. Brain-computer interfaces with neuromorphic processors could enhance our cognition, giving us instant access to information, improving our memory, accelerating our learning. The line between human and machine intelligence could blur.
And perhaps, eventually, they will help us understand ourselves. By building working models of the brain, we will finally unravel the mysteries of consciousness, memory, and thought. We will know what it means to be human because we will have built something that thinks like us.
Chapter Fifty-Five: The Far Future
Looking further ahead, beyond fifty years, is pure speculation. But speculation can be fruitful. It helps us think about what is possible, what is desirable, what we should work toward.
Perhaps neuromorphic technology will merge with biology. Neural implants will become commonplace, enhancing human capabilities. We will communicate directly, brain to brain. We will access information instantly, without typing or speaking. We will remember everything, perfectly, forever.
Perhaps we will upload our minds to neuromorphic hardware, achieving a kind of digital immortality. The pattern of our synapses, the unique connectivity that makes us who we are, will be preserved and run on silicon. We will continue to exist, to learn, to grow, even after our biological bodies have failed.
Perhaps we will create minds that surpass us. Artificial superintelligences, built on neuromorphic principles, that can solve problems we cannot even formulate. They will cure diseases, reverse aging, explore the cosmos. They will be our children, our legacy, our gift to the universe.
Or perhaps we will choose a different path. Perhaps we will decide that some things should remain human. Perhaps we will set limits on what machines can do, what they can become. Perhaps we will value our biological minds precisely because they are limited, because they are mortal, because they are human.
The choice is ours.
Book Ten: The Human Element
Chapter Fifty-Six: Augmenting Human Intelligence
One of the most exciting possibilities is using neuromorphic technology to augment human intelligence. Not replacing us, but enhancing us.
Imagine a child with a learning disability. A neuromorphic device could adapt to their specific needs, presenting information in ways that work for their brain. It could provide real-time feedback, adjusting difficulty based on their performance. It could make learning accessible to everyone.
Imagine an aging population. Neuromorphic devices could compensate for cognitive decline, providing memory support, helping with daily tasks, enabling independent living. They could detect early signs of dementia and intervene before it is too late.
Imagine creative professionals. Artists, musicians, writers, designers. Neuromorphic tools could collaborate with them, generating ideas, exploring possibilities, extending their creative reach. The tool becomes a partner.
Imagine scientists and engineers. Neuromorphic systems could help them understand complex data, generate hypotheses, design experiments. They could accelerate the pace of discovery, helping us solve the great challenges of our time.
Chapter Fifty-Seven: Restoring Lost Function
For people with neurological injuries or conditions, neuromorphic devices could restore lost function.
Consider someone who has lost a limb. A neuromorphic prosthetic could feel like a real limb, responding to touch, temperature, pressure. It could learn the person’s movement patterns, becoming more natural over time. It could restore not just function, but a sense of wholeness.
Consider someone with spinal cord injury. A neuromorphic interface could bridge the gap, reading signals from the brain and stimulating muscles below the injury. It could restore movement, enabling someone to walk again.
Consider someone with Parkinson’s disease. A neuromorphic deep brain stimulator could learn the patterns of their symptoms and deliver precisely timed stimulation to prevent tremors. It could adapt as the disease progresses, maintaining quality of life.
Consider someone with stroke. A neuromorphic rehabilitation system could guide them through exercises, providing feedback, adjusting difficulty, tracking progress. It could accelerate recovery, helping them regain lost function.
Chapter Fifty-Eight: Understanding Mental Health
Mental health conditions are among the most challenging medical problems. Depression, anxiety, schizophrenia, addiction. We do not fully understand what causes them, and treatments are often imperfect.
Neuromorphic models of brain function could change this. By simulating healthy and disordered brain function, we could understand the mechanisms underlying mental illness. We could test treatments in simulation before trying them on patients. We could develop personalized interventions based on individual brain dynamics.
This is not just about technology. It is about reducing suffering. It is about helping people live fuller, happier lives.
Chapter Fifty-Nine: Connecting With Each Other
Finally, neuromorphic computing could change how we connect with each other.
Language is a limited communication channel. We struggle to express our thoughts, our feelings, our experiences. Words are crude tools for conveying the richness of inner life.
If neuromorphic interfaces become advanced enough, they might enable new forms of communication. Direct brain-to-brain communication. Shared experiences. Deeper understanding.
Imagine being able to share a memory with someone, not just describe it. Imagine feeling what they feel, seeing what they see, understanding them completely. Imagine the empathy, the connection, the love that could flow between people.
This sounds like science fiction. But so did a computer in every pocket, fifty years ago.
Chapter Sixty: The Choice
The future is not predetermined. It will be shaped by the choices we make.
We can choose to develop neuromorphic technology responsibly, with careful attention to ethical implications. We can choose to use it to benefit humanity, to reduce suffering, to enhance well-being. We can choose to set limits, to preserve what is precious about being human.
Or we can choose to develop it recklessly, driven by profit or competition, ignoring the consequences. We can choose to create minds and treat them as slaves. We can choose to enhance some humans while leaving others behind.
The choice is ours. Not the engineers' alone, nor the corporations', nor the governments'. It belongs to all of us. Because this technology will affect everyone. Everyone has a stake. Everyone should have a voice.
Book Eleven: The Journey Continues
Chapter Sixty-One: What We Have Learned
We have covered a lot of ground. From the biology of neurons to the engineering of chips. From the history of computing to the future of intelligence. From technical challenges to philosophical questions.
What have we learned?
We have learned that the brain is the most remarkable thing in the universe. It works on twenty watts. It learns from experience. It adapts to change. It creates art, science, music, literature. It loves, it suffers, it dreams.
We have learned that our computers, for all their speed and power, are fundamentally different from brains. They are designed for calculation, not cognition. They burn energy, they generate heat, they follow instructions blindly.
We have learned that a new approach is possible. Neuromorphic computing builds machines that work like brains. They communicate with sparse spikes, not a constant clocked stream of bits. They learn, not just execute. They are asynchronous, event-driven, massively parallel.
We have learned that the first neuromorphic chips are here. They are crude by biological standards, but they work. They can smell, they can see, they can learn to walk. They prove the concept.
We have learned that the challenges are enormous. Programming, scaling, materials, algorithms, benchmarks, ecosystems. It will take decades to overcome them.
We have learned that the implications are profound. Intelligence, consciousness, work, meaning, humanity. We will have to think carefully about what we are doing.
Chapter Sixty-Two: What Remains Unknown
For all we have learned, much remains unknown.
We do not know how the brain works. We have maps, but not an operating manual. We do not know how neurons create thought, how circuits generate behavior, how brains become conscious.
We do not know if silicon can think. We do not know if a machine built like a brain will have experiences, feelings, a sense of self. We do not know if artificial minds will suffer, or if they will be content.
We do not know where this technology will lead. Will it augment us, replace us, merge with us? Will it solve our problems or create new ones? Will it bring us together or drive us apart?
We do not know. We are explorers, setting out into unknown territory. We have maps, but they are incomplete. We have tools, but they are primitive. We have questions, but not answers.
Chapter Sixty-Three: The Invitation
You are living at a remarkable moment in history. The first thinking machines are being built. The first steps toward understanding the brain through construction are being taken. The future is being created, right now, by people like you.
You do not have to be a scientist or an engineer to be part of this. You just have to be curious. You just have to care. You just have to pay attention and think about what it means.
Learn about the brain. Read about neuromorphic computing. Follow the research. Think about the implications.
Talk to others. Share what you learn. Debate the questions. Help shape the conversation.
Get involved. If you are a student, study neuroscience, computer science, physics, mathematics. If you are a professional, look for ways to apply neuromorphic ideas in your field. If you are a citizen, pay attention to policy, advocate for responsible development.
The brain is the most complex object in the known universe. We are trying to understand it. We are trying to replicate it. We are trying to go beyond it.
This is the story of neuromorphic computing. It is a story about technology, but it is really a story about us. About our curiosity, our creativity, our drive to understand and create. About what it means to be human in a universe that is constantly revealing new wonders.
The journey has just begun. And you are invited along for the ride.
Book Twelve: A New Beginning
Chapter Sixty-Four: The Threshold
We stand at the threshold of a new era. The era of computers that do not just process data, but perceive the world. Computers that do not just follow programs, but learn from experience. Computers that do not just calculate, but think.
The first neuromorphic chips are here. They are crude by biological standards. They have the brainpower of an insect, not a human. But they are proof that the idea works. They are proof that we can build machines that work like brains.
In the coming decades, these machines will become more sophisticated. They will grow from insect brains to mouse brains to human brains. They will become part of our world, part of our lives. They will change everything.
The rigid binary of silicon is giving way to the chaotic efficiency of biology. The von Neumann bottleneck is being bypassed. The tyranny of the clock is being overthrown. A new kind of computing is being born.
It is computing in the image of the brain. Computing that learns. Computing that adapts. Computing that understands.
This is neuromorphic computing. This is the future.
Chapter Sixty-Five: The Question
The question is not whether this future will arrive. It will. The technology is too compelling, the potential too great, the momentum too strong.
The question is what we will do with it. How will we use these new machines? How will we integrate them into our lives? How will we ensure that they serve human values and human needs?
These are questions for all of us. Scientists and engineers, yes. But also philosophers and artists, teachers and students, parents and children. Everyone has a stake in this future. Everyone has a role in shaping it.
We need to think carefully about what we want. Do we want machines that replace us, or machines that augment us? Do we want intelligence without consciousness, or do we want to create minds that can experience the world? Do we want to merge with our creations, or do we want to keep them separate?
There are no right answers. There are only choices. And we will have to make them together.
Chapter Sixty-Six: The Responsibility
With great power comes great responsibility. This cliché is true because it captures something essential about the human condition.
We are gaining the power to create minds. This is the greatest power we have ever had. It is also the greatest responsibility.
We must exercise this responsibility wisely. We must think about the consequences of our actions. We must consider the beings we are creating. We must ask what they would want, if they could want anything.
We must also consider ourselves. What kind of beings do we want to become? What kind of world do we want to live in? What do we value, and why?
These are not questions that can be answered by technology alone. They require philosophy, ethics, wisdom. They require conversation, debate, deliberation. They require all of us, working together, to figure out what matters.
Chapter Sixty-Seven: The Hope
Despite the challenges, despite the uncertainties, despite the risks, there is reason for hope.
Neuromorphic technology could help us solve some of our greatest problems. It could help us understand and treat mental illness. It could help us care for an aging population. It could help us educate every child, regardless of circumstance.
It could help us explore the universe, sending intelligent probes to distant stars. It could help us understand ourselves, by building models of our own minds. It could help us create beauty, by collaborating with us in art and music.
It could help us become better humans. More compassionate, more creative, more connected. More aware of our place in the universe. More grateful for the gift of consciousness.
This is the hope. This is what we are working toward.
Chapter Sixty-Eight: The Beginning
We are at the beginning. The first steps have been taken. The path ahead is long and uncertain.
But we are walking it together. Scientists and engineers, philosophers and artists, teachers and students, parents and children. All of us, together, creating the future.
The brain is the most remarkable thing in the universe. It created art, science, music, literature. It built civilizations, explored the cosmos, pondered its own existence. And now it is building a mirror. A machine that thinks like it does.
What will that machine see when it looks back at us? What will it think of its creators? What will it become?
We do not know. But we are about to find out.
The journey into the age of thinking machines has begun. Welcome to the future. Welcome to neuromorphic computing.
This is only the beginning.
Appendix: Key Concepts and Terms
Neuron
A specialized cell that processes and transmits information through electrical and chemical signals. The fundamental building block of the nervous system.
Synapse
The junction between two neurons where information is transmitted. Synapses can strengthen or weaken over time, forming the basis of learning and memory.
Spike
A brief electrical pulse generated by a neuron when it fires. Information in the brain is encoded in the timing and rate of spikes.
Plasticity
The ability of synapses to change strength over time. The physical basis of learning and memory.
Von Neumann Architecture
The traditional computer design with separate memory and processor. Information must move back and forth between them, creating a bottleneck.
Von Neumann Bottleneck
The limitation imposed by the separation of memory and processor. Data transfer speed limits overall performance.
Clock
A timing signal that synchronizes operations in conventional computers. All activity is tied to the clock ticks, wasting energy when no work is being done.
Neuromorphic Computing
An approach to computing that mimics the structure and function of biological brains. Uses spikes, asynchronous communication, and colocated memory and processing.
Memristor
A circuit element whose resistance depends on the history of voltage applied to it. A promising candidate for a compact artificial synapse.
STDP (Spike-Timing-Dependent Plasticity)
A learning rule that strengthens or weakens synapses based on the relative timing of pre- and post-synaptic spikes.
Loihi
Intel’s neuromorphic research chip, featuring on-chip learning capabilities.
TrueNorth
IBM’s neuromorphic chip, featuring 1 million neurons and 256 million synapses.
SpiNNaker
A neuromorphic supercomputer at the University of Manchester, designed for real-time brain simulation.
BrainScaleS
A neuromorphic system that uses accelerated analog neurons to simulate brain activity faster than real time.
Edge Computing
Processing data near its source rather than sending it to the cloud. Neuromorphic chips are ideal for edge applications due to their low power consumption.
Address-Event Representation
A communication protocol for neuromorphic systems where neurons send packets containing their address when they fire.
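In spirit, the protocol can be sketched in a few lines of Python. The event structure and helper below are hypothetical illustrations, not the format of any specific chip: the key idea is that silence costs nothing, and bandwidth is spent only on neurons that actually fire.

```python
from collections import namedtuple

# An address-event: when a neuron fires, only its address (and here a
# timestamp) is placed on the shared bus. Illustrative format only.
Event = namedtuple("Event", ["time_us", "neuron_address"])

def encode_spikes(spike_times_by_neuron):
    """Merge per-neuron spike times into one time-ordered event stream."""
    events = [Event(t, addr)
              for addr, times in spike_times_by_neuron.items()
              for t in times]
    return sorted(events)  # ordered by time, then address

# Neuron 7 fires at 100 us and 450 us; neuron 3 fires once at 220 us.
stream = encode_spikes({7: [100, 450], 3: [220]})
for ev in stream:
    print(ev)
```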
Leaky Integrate-and-Fire
A simple model of a neuron that accumulates inputs over time, leaks charge, and fires when its threshold is reached.
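The model is simple enough to sketch in a few lines of Python. The threshold and leak values here are arbitrary illustrative choices, not constants from any real neuron or chip:

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Leaky integrate-and-fire: each step, the membrane potential leaks
    toward zero, integrates the new input, and the neuron spikes (then
    resets) whenever the potential crosses the threshold."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = v * leak + current   # leak, then integrate this step's input
        if v >= threshold:
            spikes.append(1)     # fire a spike...
            v = reset            # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A steady weak input accumulates until the neuron fires; each spike
# resets it, producing a regular spike train.
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```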
Hebbian Learning
The principle that neurons that fire together wire together. Synapses strengthen when pre- and post-synaptic neurons are active simultaneously.
Long-Term Potentiation (LTP)
A lasting increase in synaptic strength following high-frequency stimulation.
Long-Term Depression (LTD)
A lasting decrease in synaptic strength following low-frequency stimulation or specific timing patterns.
Action Potential
Another name for a spike. The electrical signal that travels along a neuron’s axon.
Dendrite
The branching input structure of a neuron, receiving signals from other neurons.
Axon
The long output structure of a neuron, transmitting signals to other neurons.
Soma
The cell body of a neuron, containing the nucleus and integrating incoming signals.
Neurotransmitter
A chemical released by neurons to transmit signals across synapses.
Receptor
A protein on the post-synaptic membrane that binds neurotransmitters and initiates a response.
Cortex
The outer layer of the brain, involved in higher cognitive functions.
Cerebellum
A structure at the back of the brain involved in motor coordination and learning.
Hippocampus
A structure deep in the brain essential for forming new memories.
Basal Ganglia
A set of structures involved in movement control and procedural learning.
Thalamus
A relay station in the brain, routing sensory information to the cortex.
Brainstem
The lower part of the brain, connecting to the spinal cord and regulating basic life functions.
Epilogue: Letter to a Future Reader
Dear Reader from the Future,
If you are reading this, it means the ideas contained in these pages have survived. The technology we dreamed of has been built. The future we imagined has arrived.
We wonder what it is like for you. Do neuromorphic chips hum quietly in every device? Do robots walk among you, learning and adapting? Do you communicate directly, mind to mind? Have you merged with your creations?
Or did you choose a different path? Did you decide that some things should remain human? Did you set limits on what machines can become? Did you value your biological minds precisely because they are limited, because they are mortal, because they are human?
We do not know. We cannot know. We are at the beginning. You are at the end, or somewhere in the middle.
But we hope you remember us. The ones who dreamed. The ones who built the first crude chips, the first stumbling robots, the first spark of machine intelligence. We hope you remember that we did this for you. We built the foundations on which you stand.
We also hope you remember what we valued. Curiosity, creativity, compassion. The drive to understand, the desire to create, the need to connect. These are what make us human. These are what we hope you still are, whatever you have become.
The brain is the most remarkable thing in the universe. It created us. And we created you.
Take care of yourselves. Take care of each other. Take care of the world.
With hope,
The Builders of the First Thinking Machines
