The central paradox of the current technological race is that humanity is trying to build general artificial intelligence without fully understanding the only system that has already demonstrated such capability in practice: the human brain. Companies spend hundreds of billions of dollars on data centers, chips, and electricity, while investment in brain research remains disproportionately small.
The question is not whether computational scale matters. The question is different: does the current approach run into architectural limitations that cannot be overcome simply by adding more power? If so, the most direct path forward lies not through yet another giant cluster but through a deeper understanding of how the brain is structured.
Why AI does not learn like a human
The brain's most striking advantage is continuous learning. A person accumulates knowledge throughout life, masters new skills, revises their picture of the world, and typically does not erase what was learned before. The brain restructures itself constantly: new connections form, old ones strengthen or weaken. This plasticity is what keeps intelligence alive rather than frozen.
In modern models the picture is different: first comes training, then deployment. After training, the weights are largely fixed, and the system does not learn from new experience the way a person does. There are workarounds, such as external memory, fine-tuning, and long context windows, but they do not remove the underlying problem. If new knowledge is written directly into the model's weights, catastrophic forgetting sets in: having mastered task B, the network starts performing worse on task A.
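The effect is easy to reproduce. Below is a minimal, self-contained sketch (the tasks, network size, and hyperparameters are invented purely for illustration): a tiny network is trained on task A, then trained on task B with no rehearsal of A, and its accuracy on A collapses toward chance.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, axis):
    """Binary task: label is 1 when the chosen input axis is positive."""
    X = rng.normal(size=(n, 2))
    y = (X[:, axis] > 0).astype(float)
    return X, y

def init_net(hidden=16):
    """A tiny two-layer network with random initial weights."""
    return {
        "W1": rng.normal(scale=0.5, size=(2, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(scale=0.5, size=(hidden, 1)),
        "b2": np.zeros(1),
    }

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])
    logits = np.clip((h @ p["W2"] + p["b2"]).ravel(), -30, 30)
    return h, 1.0 / (1.0 + np.exp(-logits))

def train(p, X, y, lr=0.5, steps=2000):
    """Plain full-batch gradient descent on binary cross-entropy."""
    n = len(X)
    for _ in range(steps):
        h, out = forward(p, X)
        d_logits = (out - y)[:, None] / n             # dLoss/dlogits
        d_h = d_logits @ p["W2"].T * (1.0 - h ** 2)   # backprop through tanh
        p["W2"] -= lr * (h.T @ d_logits)
        p["b2"] -= lr * d_logits.sum(axis=0)
        p["W1"] -= lr * (X.T @ d_h)
        p["b1"] -= lr * d_h.sum(axis=0)

def accuracy(p, X, y):
    _, out = forward(p, X)
    return ((out > 0.5) == y).mean()

Xa, ya = make_task(1000, axis=0)   # task A: sign of the first input
Xb, yb = make_task(1000, axis=1)   # task B: sign of the second input

net = init_net()
train(net, Xa, ya)
print("after A:  accuracy on A =", accuracy(net, Xa, ya))

train(net, Xb, yb)                 # sequential training, no rehearsal of A
print("after B:  accuracy on A =", accuracy(net, Xa, ya),
      "| accuracy on B =", accuracy(net, Xb, yb))
```

Continual-learning techniques such as rehearsal and elastic weight consolidation soften this effect, but none of them yet gives a network the brain's ability to simply keep learning.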
Humans do not work this way. Studying history does not erase algebra, and learning to drive does not destroy the ability to swim. The brain, then, solves the problem of updating knowledge differently, and this is where neurobiology can give artificial intelligence what it critically lacks: a principle of learning without self-destruction.
There is also an important architectural hint. In a typical deep model, almost the entire network is active for every input, whereas in the brain only a small fraction of neurons fires at any given moment. In addition, biological learning relies not on a global error signal but on local interactions between neurons. This combination makes the brain both plastic and resilient. Artificial intelligence is only beginning to approach such principles and remains far from biological modularity and decentralization; a rough sketch of both ideas follows below.
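The sketch below is purely illustrative (the sizes, constants, and the specific rules are arbitrary choices, not a model of any real circuit). It combines the two ingredients just mentioned: k-winners-take-all sparsity, so only a few units respond to each input, and an Oja-style Hebbian update in which every synapse learns only from its own pre- and post-synaptic activity, with no global error signal.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT, K = 64, 32, 4   # only K of the 32 output units fire per input
W = rng.normal(scale=0.1, size=(N_OUT, N_IN))

def step(x, lr=0.05):
    """One stimulus presentation: sparse response, then local updates."""
    drive = W @ x
    winners = np.argsort(drive)[-K:]      # k-winners-take-all sparsity
    y = np.zeros(N_OUT)
    y[winners] = drive[winners]
    # Oja-style Hebbian rule: each synapse changes based only on its own
    # pre-synaptic input x and post-synaptic output y. Units that did not
    # fire are left untouched, so their learned features are preserved.
    W[winners] += lr * (np.outer(y[winners], x)
                        - y[winners, None] ** 2 * W[winners])

for _ in range(1000):
    step(rng.normal(size=N_IN))

# The Oja term keeps weight rows bounded without any global supervision.
print("max row norm after learning:", np.linalg.norm(W, axis=1).max().round(2))
```

Because inactive units are never updated, sparse local rules of this kind overwrite far less of the network per example than a global gradient step, which is one reason they are studied as a route to learning without catastrophic forgetting.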
The energy gap: AI versus the brain
The contrast in resource efficiency is even more striking. The human brain weighs about 1.3 kg and consumes roughly 20 watts. On this modest energy budget it recognizes speech, builds abstractions, makes decisions, composes music, and produces science and political systems.
At the other end lies the infrastructure of modern artificial intelligence. Data centers with a capacity of over 1 GW are already being built; that is comparable to the consumption of a large city, and at 20 watts per brain it is the power budget of roughly fifty million human brains. The projects of Amazon, Meta, xAI, Oracle, and OpenAI are measured not in servers but in entire energy landscapes. Sam Altman speaks openly about the goal of adding roughly 1 GW of data-center capacity almost every week, while Elon Musk promotes space-based data centers as the future of scaling.
Such a gap cannot be explained only by the fact that machines are still 'younger' than the brain. The problem runs deeper: digital computation is extremely wasteful, and a significant share of the energy goes not into the arithmetic itself but into moving data between memory and the processing units.
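The scale of this waste is easy to estimate. The figures below are rough, widely cited per-operation energy estimates for a 45 nm chip (Horowitz, ISSCC 2014); exact values vary by hardware, but the ratio is the point.

```python
# Rough per-operation energy estimates for a 45 nm process
# (Horowitz, ISSCC 2014); exact values vary by hardware.
PJ_FP32_ADD  = 0.9     # one 32-bit floating-point addition
PJ_FP32_MULT = 3.7     # one 32-bit floating-point multiplication
PJ_DRAM_READ = 640.0   # reading one 32-bit word from off-chip DRAM

# One multiply-accumulate whose two operands must be fetched from DRAM:
compute  = PJ_FP32_ADD + PJ_FP32_MULT
movement = 2 * PJ_DRAM_READ
print(f"arithmetic: {compute:.1f} pJ, data movement: {movement:.1f} pJ")
print(f"moving the operands costs ~{movement / compute:.0f}x the arithmetic")
```

By this estimate, fetching the operands costs on the order of a hundred times more energy than the multiplication and addition themselves, which is why caches, on-chip memory, and batching dominate accelerator design.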
The brain has no such bottleneck. Memory and processing are physically combined: a neuron participates in storage and computation at the same time. Moreover, the brain operates as an analog system, in which computation is inseparable from the physical process that carries it. If these principles are understood more precisely, the result could be not merely cheaper artificial intelligence but an entirely new class of computing systems.
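A toy model shows what 'memory and processing combined' means in hardware terms. In an analog crossbar, the principle behind memristive in-memory computing, weights are stored as conductances, and a matrix-vector product emerges from Ohm's and Kirchhoff's laws rather than from shuttling operands to a processor. The numbers below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stored weights are the physical conductances of the crossbar cells (siemens).
G = rng.uniform(0.0, 1.0, size=(4, 8))
# The input vector is applied as voltages on the column wires (volts).
v = rng.uniform(0.0, 0.2, size=8)

# Ohm's law gives each cell's current (G_ij * v_j); Kirchhoff's current law
# sums the currents along every row wire. The result is a matrix-vector
# product computed exactly where the weights live, with no data movement.
i_out = G @ v
print("output currents (amperes):", i_out)
```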
Why only humans make genuine discoveries
The hardest question concerns not the speed or cost of computation but the nature of new knowledge. Large language models impress with how confidently they operate on the information humanity has already accumulated. But they hit their limit precisely where a genuine breakthrough is required rather than a recombination of the known.
Civilization rests on ideas that did not exist before. The heliocentric system of Nicolaus Copernicus, Isaac Newton's theory of gravity, Charles Darwin's theory of evolution, the structure of DNA described by James Watson and Francis Crick: these are not merely successful rearrangements of already published fragments. They are ideas that forced a new view of how the world works and set the direction for entire epochs of research and technology.
Artificial intelligence does occasionally show flashes of genuine novelty; AlphaGo's celebrated move 37 is a standard example. But an unconventional move in a formalized game environment is one thing; new physics, new biology, or a new theory of society is quite another.
This is no reason to devalue what has been achieved; it is a pointer to the likely source of the next breakthrough. If the brain remains the only known generator of truly new knowledge, then studying its mechanisms is not just a fundamental academic topic but a matter of technological leadership.
Where to direct efforts
Against this backdrop, the imbalance in funding is especially noticeable. Large neuroscience projects exist, but their scale is still incomparable to the investment in artificial intelligence. The largest U.S. government effort, the BRAIN Initiative, has received about $3 billion in total over more than a decade, and even in peak years its annual budget was around $680 million.
Another telling example is Europe's Human Brain Project, which brought together dozens of research centers and received about €1 billion over ten years.
That is why an increasingly common proposal is not simply to 'study the brain more' but to build detailed maps of its neural wiring: connectomes, schemes of how cells and synapses are connected. For neurotechnology, such a project could play a role comparable to the one the Human Genome Project played for biotechnology. Without it, artificial intelligence risks traveling the most expensive possible route for a long time.
The race for general artificial intelligence has chosen too narrow a route. The world is spending hundreds of billions of dollars trying to replicate intelligence while economizing on the study of its only working model. That is not just a mismatch; it is a strategic mistake.
