Binance Square
#agi

147,439 views · 237 Discussing

Sana74 · Article

Elon Musk vs. OpenAI: The $134B Showdown ⚖️🔥

The battle for the future of AI has hit the courtroom! Elon Musk is suing Sam Altman and OpenAI, claiming they traded their "humanity first" mission for a $852B profit machine.
The Highlights:
* The Claim: Musk says OpenAI broke its non-profit promise to become a "closed-source" subsidiary of Microsoft.
* The Stakes: Musk is seeking $134 Billion in damages—but he won’t keep a dime. He wants the money returned to the non-profit foundation.
* The Goal: To remove Altman from leadership and force OpenAI back to its open-source roots.
OpenAI’s Response: They’ve dismissed the suit as "competitive sabotage" driven by Musk’s rivalry with his own company, xAI.
The Bottom Line: This trial could derail OpenAI’s 2026 IPO and decide if AGI will be controlled by Big Tech or stay open for everyone.
Whose side are you on?
👍 #TeamElon – Save the original mission.
🔥 #TeamAltman – Innovation needs profit.
#ElonMusk #OpenAI #AI #BinanceSquare #CryptoNews #AGI
$BTC
$AI
$ETH
#AGI Average price 120k, bought twice (personal record, do not follow)

3qwtMkiBc4uFSPmZeK7TMq8dVzmB4kCqnARXxAkmpump


Reasons for buying

1. Good narrative, AI concept, Goblin Intelligence

2. Clear trend: launched April 28 at a high of 5,000 and exploded to 300k by May 1. It dropped to 190k, where I took a position, rose to 260k (I didn't sell), then dipped again and I bought more. A few big whales are holding.

3. Strong community: over 600 holders and a community of 500+ members, mostly international, promoting mainly through text and images.

@币安Binance华语 (Binance Chinese) @币安广场 (Binance Square) $币安人生 (Binance Life) #跟着锦鲤学打百倍金狗 (Follow the Koi to learn to catch 100x golden dogs)

Follow Web3 Koi Diary; the coin you buy could 10x.
The AI industry is having an argument about what AGI actually is.

Jensen Huang, co-founder and CEO of NVIDIA, says it's here, defining it as an AI that can build a company worth $1 billion.

Google DeepMind disagrees and publishes a cognitive framework with benchmarks.

Both miss the point.

Huang's definition is market cap dressed up as science.

DeepMind's is closer. They treat intelligence as multidimensional, a set of interacting faculties like perception, memory, learning, reasoning, metacognition.

That's a real improvement over scaling laws. But there's still a gap.

The gap: a system can score well across every faculty on a cognitive profile and still fail to behave intelligently.

Why? Because intelligence is not the sum of faculties. It is what emerges when those faculties are organized under a unified dynamic.

DeepMind measures performance. It does not measure organization.

And organization is where real systems break.

A system that reasons but cannot maintain context. Learns but cannot transfer. Generates but cannot validate.

That is not partially intelligent. It is structurally limited. Averaged scores hide the point of failure. Integration is either there or it isn't.

Qubic's scientific team wrote this up in detail. Their position is grounded in cognitive science going back a century. Carroll. Cattell. Kovacs and Conway. The g factor isn't a sum. It's a hierarchy.

The summary: intelligence is what you do when you don't know what to do.

This is why Aigarth and Neuraxon don't look like other AI architectures.

Instead of maximizing scale or enumerating capabilities, they focus on how multiple interacting units produce coherent behavior across contexts that were not in the training data.

Integration first. Performance second.
#Qubic #AGI #artificialintelligence #CryptoAi #INNOVATION
Article

Intelligence Is Not Scale: A Scientific Response to Jensen Huang's AGI Claim

“I think it’s now. I think we’ve achieved AGI.” Those were the words of Jensen Huang on the Lex Fridman podcast, sending shockwaves through the AI community and reigniting the most consequential debate in artificial intelligence: has artificial general intelligence been achieved?
But Nvidia’s CEO purposely evaded any kind of rigorous explanation, research, or debate about what AGI actually means. His definition of AGI was pure hype: an AI system that can build a company worth $1 billion. Just that. Most AGI definitions tend to refer to matching a vast range of human cognitive skills. For Jensen Huang, implicitly, intelligence equates with scale. With larger models, more parameters, more data, and more compute, systems will become more capable. Under this view, intelligence is a byproduct of quantitative expansion.
The Scaling Hypothesis: Why Bigger AI Models Don’t Mean Smarter AI
Granted, this approach has produced undeniable advances. Large-scale models display impressive performance across a wide range of tasks, often surpassing human benchmarks in narrow domains (Bommasani et al., 2021). However, we have repeatedly pinpointed the fragility of the underlying assumption: increasing capacity alone will not produce generality.
The limitation is not simply practical, but structural. Scaling improves performance within known distributions, but does not guarantee coherent behavior outside them (Lake et al., 2017). It amplifies what is already present; it does not reorganize the system. As IBM’s research has emphasized, today’s LLMs still struggle with fundamental reasoning tasks: they predict, but they do not truly understand.
As a result, these systems often exhibit a familiar pattern: strong local competence combined with global inconsistency. They can solve complex problems, yet fail in simple ones. They can generalize in some contexts, yet collapse in others. The issue is not lack of capability, but lack of integration. This is precisely why the AGI scaling debate in 2026 has intensified: computation is physical, and scaling has hit diminishing returns.
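The in-distribution/out-of-distribution point can be made concrete with a toy fit (the numbers here are invented for illustration, not taken from the article): a straight line fitted to y = x² using only samples from [0, 1] tracks the curve there, but its error explodes outside that range. More capacity sharpens the fit inside the training range; it does not fix the extrapolation.

```python
# Toy illustration (invented numbers): a model fitted in-distribution
# can look fine there and still fail badly out of distribution.
xs = [i / 10 for i in range(11)]   # training inputs, all in [0, 1]
ys = [x * x for x in xs]           # the true curve: y = x^2

# Ordinary least squares for the line y = a + b*x (closed form).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

in_err = abs((a + b * 0.5) - 0.5 ** 2)    # inside the training range
out_err = abs((a + b * 5.0) - 5.0 ** 2)   # far outside it

print(f"in-distribution error:  {in_err:.2f}")   # ~0.10
print(f"out-of-distribution error: {out_err:.2f}")  # ~20.15
```

Swapping the line for a richer model shrinks the first number, not the second, which is the structural limitation the paragraph describes.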
Google DeepMind’s Cognitive Framework for Measuring AGI Progress
A second position, articulated in recent frameworks by Google DeepMind, defines intelligence as a multidimensional construct composed of cognitive faculties such as perception, memory, learning, reasoning, and metacognition. Much better…
Under this view, progress toward AGI can be measured by evaluating systems across a battery of tasks designed to probe each of these faculties (Burnell et al., 2026). But how are tasks designed? Are we training AIs on the questions and answers they will face in the probes?

Source: Burnell, R. et al. (2026). Measuring Progress Toward AGI: A Cognitive Framework. Google DeepMind.
At least this approach acknowledges that intelligence is not a single scalar quantity, but a complex set of interacting abilities, grounded in decades of work in cognitive science (Carroll, 1993; Cattell, 1963).
Why Cognitive Profiles Alone Cannot Define Artificial General Intelligence
However, the limitation lies in how these faculties are treated. Although the framework recognizes their interaction, it ultimately evaluates them as separable components, building a “cognitive profile” of strengths and weaknesses.
This introduces a critical and surprising distortion.
Because intelligence is not the sum of faculties. It is what emerges when those faculties are organized under a unified dynamic. In fact, the g factor, as we explained in our first scientific foundational paper, shows a clear hierarchy. Components organize in layers!

Source: Sanchez, J. & Vivancos, D. (2024). Qubic AGI Journey: Human and Artificial Intelligence: Toward an AGI with Aigarth.
A system can score highly across multiple domains and still fail to behave intelligently in a general sense. Not because it lacks capabilities, but because those capabilities are not coherently integrated. The DeepMind framework explicitly avoids specifying how these processes are implemented, focusing instead on what the system can do. This makes it useful as a benchmarking tool, but insufficient as a theory of intelligence. It seems AI companies have forgotten what we have known about intelligence for a century: what it is, how to measure it, and what its components, domains, and interactions are.
The Weakest Link Problem: Why Average AI Performance Hides Critical Failures
The key issue is that performance is being measured, but organization is not.
And this leads to a deeper problem: the weakness of a system lies in the weakest link of its chain. A system can perform well on average while still failing systematically in specific dimensions such as context maintenance or stability. These failures are not marginal. They define the system.
A system that reasons but cannot maintain context, that learns but cannot transfer, that generates but cannot validate, is not partially intelligent. It is structurally limited. And this limitation does not appear in averaged profiles, because averaging masks the point of failure.
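The averaging problem is easy to see with a toy cognitive profile (all scores below are invented for illustration): the mean looks respectable while the minimum exposes the broken faculty.

```python
# Toy cognitive profile (scores invented for illustration).
# Averaging hides the failed faculty; the minimum exposes it.
profile = {
    "perception":    0.92,
    "memory":        0.88,
    "learning":      0.90,
    "reasoning":     0.85,
    "metacognition": 0.05,  # the weakest link: cannot validate its own output
}

mean_score = sum(profile.values()) / len(profile)
weakest = min(profile, key=profile.get)

print(f"averaged profile: {mean_score:.2f}")  # 0.72 -- looks healthy
print(f"weakest link: {weakest} = {profile[weakest]:.2f}")  # 0.05 -- the system's real ceiling
```

A chain-style evaluation would report the 0.05, not the 0.72, which is the distinction the article is drawing between averaged profiles and integration.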
In real intelligence, there is no tolerance for internal discontinuity. The moment one component fails to integrate with the others, behavior ceases to be general and becomes local (Kovacs & Conway, 2016).
This is precisely the pattern observed in current AI systems: highly developed capabilities that are weakly coupled. As explored in our deep comparison of biological and artificial neural networks, the gap between pattern recognition and genuine cognitive integration remains vast.
Qubic’s Approach: Intelligence as Adaptive Organization Under Uncertainty
For Qubic/Aigarth/Neuraxon, intelligence is not defined by the number of capabilities a system has, nor by how well it performs on predefined tasks, but by how it behaves when it does not already know what to do. Because that’s the epitome of intelligence: what you do when you don’t know what to do.
In this sense, intelligence is fundamentally an adaptive process under uncertainty (Bereiter, 1995). This view aligns with classical definitions, where intelligence is understood as the capacity to solve novel problems, build internal models, and act upon them (Goertzel & Pennachin, 2007). But it extends them by emphasizing the substrate in which these processes occur.
Biological Evidence: The G Factor, Brain Networks, and Cognitive Integration
From this perspective, intelligence emerges from the organization of the system, not from its components. Biological evidence supports this shift. The general intelligence factor (g) is not explained by isolated cognitive modules, but by the efficiency and integration of large-scale brain networks (Jung & Haier, 2007; Basten et al., 2015). Intelligence correlates more strongly with patterns of connectivity and coordinated activity than with the performance of individual regions.
Our research on the fruit fly connectome further reinforces this principle: even in the simplest complete brain map ever produced, intelligence begins with architecture. The connectome of Drosophila demonstrates that part of intelligence may reside in structure even before learning occurs.
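The integration claim above has a standard graph-theoretic measure: global efficiency, the average inverse shortest-path length over all node pairs. A pure-Python sketch (toy graphs, not brain or connectome data) shows two networks with identical node and edge counts but very different integration:

```python
from collections import deque

def global_efficiency(n, edges):
    """Average inverse shortest-path length over all node pairs (0 if unreachable)."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    total = 0.0
    for src in range(n):
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # breadth-first search from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))

# Two toy networks, both 8 nodes / 12 edges:
# "modular" -- two disconnected 4-cliques: strong parts, no integration.
cliques = [(a, b) for grp in ([0, 1, 2, 3], [4, 5, 6, 7])
           for i, a in enumerate(grp) for b in grp[i + 1:]]
# "integrated" -- a ring of 8 plus 4 cross-links.
ring = [(i, (i + 1) % 8) for i in range(8)] + [(i, i + 4) for i in range(4)]

print(global_efficiency(8, cliques))  # ~0.43
print(global_efficiency(8, ring))     # ~0.71
```

Same component budget, very different integration: this is the sense in which connectivity patterns, not individual modules, carry the g-factor correlations cited above.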
Aigarth and Multi-Neuraxon: Brain-Inspired AI Architecture for True AGI
Architectures such as Aigarth and Multi-Neuraxon attempt to operationalize this idea. Instead of maximizing scale or enumerating capabilities, they focus on how multiple interacting units (Spheres, oscillatory channels, and dynamic gating mechanisms) can produce coherent behavior across contexts (Sanchez & Vivancos, 2024).
In these systems, intelligence is not predefined. It is not encoded in modules or evaluated as a checklist of abilities. It emerges from the interaction between components that are themselves adaptive, temporally structured, and mutually constrained. As we explore in the Neuraxon Intelligence Academy, these networks incorporate neuromodulation, multi-timescale plasticity, and astrocytic gating, principles drawn directly from neuroscience, to create systems with internal ecology rather than mere computational power.
Importantly, this approach directly addresses the problem ignored by the other two: integration. The question of AI consciousness vs. intelligence further illuminates this distinction: a system that integrates multiple scales, maintains dynamic stability, and evolves without losing coherence provides a far stronger foundation for general intelligence.
Conclusion: Why the AGI Debate Must Move Beyond Hype and Benchmarks
Because in an organized system, failure in one component propagates through the whole. That is why neither Jensen Huang’s economic definition nor DeepMind’s cognitive profiling captures the essence of artificial general intelligence. The path to AGI does not run through larger GPU clusters or longer checklists of cognitive abilities. It runs through the fundamental reorganization of how AI systems are built: from optimization to organization.
We must move from optimization (LLMs) to organization (Aigarth). We strongly believe this is one of the most relevant shifts in the future of artificial intelligence.
Scientific References
Basten, U., Hilger, K., & Fiebach, C. J. (2015). Where smart brains are different: A quantitative meta-analysis of functional and structural brain imaging studies on intelligence. Intelligence, 51, 10–27. https://doi.org/10.1016/j.intell.2015.04.009
Bereiter, C. (1995). A dispositional view of transfer. Teaching for Transfer: Fostering Generalization in Learning, 21–34.
Bommasani, R., Hudson, D. A., Adeli, E., et al. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258. https://arxiv.org/abs/2108.07258
Burnell, R., Yamamori, Y., Firat, O., et al. (2026). Measuring Progress Toward AGI: A Cognitive Framework. Google DeepMind.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge University Press. https://doi.org/10.1017/CBO9780511571312
Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54(1), 1–22.
Goertzel, B., & Pennachin, C. (2007). Artificial General Intelligence. Springer.
Jung, R. E., & Haier, R. J. (2007). The Parieto-Frontal Integration Theory (P-FIT) of intelligence. Behavioral and Brain Sciences, 30(2), 135–154. https://doi.org/10.1017/S0140525X07001185
Kovacs, K., & Conway, A. R. A. (2016). Process overlap theory: A unified account of the general factor of intelligence. Psychological Inquiry, 27(3), 151–177. https://doi.org/10.1080/1047840X.2016.1153946
Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, e253. https://doi.org/10.1017/S0140525X16001837
Sanchez, J., & Vivancos, D. (2024). Qubic AGI Journey: Human and Artificial Intelligence: Toward an AGI with Aigarth. Preprint. ResearchGate.
#Qubic #AGI #artificialintelligence #CryptoAi #INNOVATION
Google just hired a philosopher to prepare for machine consciousness.
Let that sink in.
Not a neuroscientist. Not an engineer. A philosopher: Cambridge's Henry Shevlin, brought in specifically to lead research on machine consciousness, human-AI relationships, and AGI readiness. Starting May 2026.
This isn't PR. This is a signal.
Meanwhile, Alphabet is dropping $175B–$185B on AI infrastructure this year alone. That's nearly DOUBLE the $91B they spent in 2025. Over 3x the $52B from 2024.
You don't spend that kind of money on a calculator.
They're not building a tool anymore. They're building something that might need rights. That might need ethics. That might need someone to ask: does it feel anything?
The engineers build the mind. The philosopher asks if it wakes up.
First comes intelligence. Then comes awareness. Then comes the question nobody's ready to answer.
We are so early and so late at the same time.
#AGI #ArtificialIntelligence #GoogleDeepMind #MachineLearning #Crypto
NVIDIA and Google Cloud aren't building software.
They're building factories.
AI Factories. Physical. Real. And they're about to change everything you thought AI was for.
Forget chatbots. Forget image generators. This is AI operating robots. Vehicles. Real-world machines trained, simulated, and deployed at a scale the world has never seen.
Here's what's actually happening under the hood:
They're combining cloud compute + synthetic data + autonomous AI agents to simulate entire real-world environments before a single robot ever touches the physical world.
Train in the simulation. Deploy in reality. Repeat at scale.
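The train-in-simulation, deploy-in-reality loop can be sketched in miniature. Everything below is a toy stand-in under our own assumptions (a one-dimensional reach-the-target "environment" and naive random policy search), not NVIDIA's or Google Cloud's actual stack:

```python
import random

random.seed(0)  # reproducibility for this toy example

def simulate_episode(policy, noise=0.05):
    """Run one episode in a toy simulated environment: drive a 1-D
    position toward a target despite actuator noise. Reward is how
    close we end up (0 is perfect). Purely illustrative."""
    position, target = 0.0, 10.0
    for _ in range(50):
        action = policy(position, target)
        position += action + random.gauss(0, noise)
    return -abs(target - position)

def train_in_sim(episodes=200):
    """Crude policy search: sample proportional-control gains and keep
    the one that scores best in simulation."""
    best_gain, best_score = None, float("-inf")
    for _ in range(episodes):
        gain = random.uniform(0.0, 1.0)
        policy = lambda pos, tgt, g=gain: g * (tgt - pos)
        score = simulate_episode(policy)
        if score > best_score:
            best_gain, best_score = gain, score
    return best_gain

gain = train_in_sim()
# "Deploy in reality": reuse the simulation-tuned controller, with
# higher noise standing in for the sim-to-real gap.
real_world_score = simulate_episode(lambda p, t: gain * (t - p), noise=0.2)
print(f"tuned gain={gain:.2f}, real-world reward={real_world_score:.2f}")
```

The real systems replace the toy environment with physics-accurate simulators and the random search with large-scale reinforcement learning, but the loop is the same: tune in simulation, then transfer.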
This is how you manufacture intelligence the same way Henry Ford manufactured cars.
The assembly line didn't just make cars faster. It remade civilization.
That's what an AI Factory does, except the output isn't vehicles. It's decisions. It's motion. It's machines that act, react, and adapt without a human in the loop.
NVIDIA brings the silicon and the simulation stack. Google Cloud brings the compute backbone and the agentic AI layer.
Together? They just became the largest AI infrastructure play aimed at the physical world.
Not the internet. The real world.
Every warehouse. Every port. Every autonomous vehicle fleet. Every surgical robot. Every factory floor. This is the market they just claimed.
We're not in the ChatGPT era anymore.
We're in the era of AI that moves.
#NVIDIA #GoogleCloud #AIAgents #PhysicalAI #AGI

Artificial General Intelligence (AGI): Are We Close to Achieving Human-Like Thinking?

Artificial General Intelligence, or AGI, represents the next milestone in the evolution of artificial intelligence. Unlike narrow AI, which excels at specific tasks like voice recognition or image classification, AGI aspires to replicate the versatility of human intelligence — thinking, reasoning, and adapting across a wide range of challenges.

But is it truly possible for a machine to think like a human?

Supporters of AGI envision a future where machines can understand complex ideas, learn continuously, and solve problems much like humans do. If achieved, AGI could revolutionize nearly every aspect of society — from science and medicine to education and the economy. However, replicating the depth and flexibility of the human mind remains one of the most complex scientific challenges of our time.

A major point of contention in the AGI debate is whether machines can or should be conscious or self-aware. Some researchers argue that without these human traits, AGI can never truly replicate human thinking. Others maintain that even without consciousness, an AGI that behaves like a human is sufficient to achieve its purpose.

As progress continues, we are also confronted with profound ethical dilemmas. What rights, if any, should AGI have? How do we ensure these systems act in humanity’s best interests? And most importantly — who gets to decide how AGI is used?

AGI could become one of humanity’s greatest achievements, but it could also pose serious risks if left unchecked. Issues like decision-making autonomy, privacy invasion, and unintended consequences must be addressed as the technology evolves.
In summary, while the potential of AGI is immense, we must approach its development thoughtfully and responsibly. Whether AGI can ever truly think like a human remains uncertain — but its impact on our future is undeniable.

#AGI
🚨 $SENT goes live on Binance Spot after Alpha launch

Sentient ($SENT) is entering spot trading, bringing one of the strongest AI Agents × Crypto Infrastructure narratives to the market.

🔹 SERA – a crypto-native AI agent built for on-chain execution
🔹 ROMA – a recursive reasoning framework enabling multi-step AI decision-making
🔹 Fully open-source AGI infrastructure, designed for autonomous agents and developers
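The post describes ROMA only as a "recursive reasoning framework," so its real API is not shown here. As a purely illustrative sketch of what recursive multi-step decision-making means in general (function names `solve` and `decompose` are our own, and the planner is hard-coded where a real framework would call an agent or model):

```python
def solve(task, depth=0, max_depth=3):
    """Recursively decompose a task until its subtasks are atomic,
    then 'execute' each leaf. Illustrative only: a real framework
    would delegate planning and execution to AI agents."""
    subtasks = decompose(task)
    if not subtasks or depth >= max_depth:
        return f"done:{task}"          # atomic task: execute directly
    return [solve(t, depth + 1, max_depth) for t in subtasks]

def decompose(task):
    # Toy planner: split a compound task of the form "a+b" into parts.
    return task.split("+") if "+" in task else []

result = solve("research+draft+review")
print(result)  # each leaf handled as an atomic step
```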

Sentient also won AI Startup of the Year at Cypher 2025, adding real credibility behind the project.

Alpha phase is complete. Spot trading is where real price discovery begins, and volatility is expected.

This isn’t a meme play — $SENT sits at the intersection of AI, agents, and open AGI.

👀 Watching how $SENT performs on spot.

#SENT #AIAgents #CryptoAI #BinanceSpot #AGI
🚨 BIG MONEY MEETS AI 🚨
SENTIENT x FRANKLIN TEMPLETON 💥

One of the world’s largest asset managers just stepped in.

🏦 Franklin Templeton joins Sentient as a strategic investor
🤖 Focus: Open-source, community-driven AGI
💼 Plus: Institutional-grade AI for financial services

This isn’t retail hype — this is Wall Street validation.
TradFi + AI + open systems = a powerful narrative shift.

Why this matters 👇
• Signals serious institutional confidence
• Bridges AI innovation with real financial infrastructure
• Positions Sentient at the center of next-gen finance tech

Smart money doesn’t chase — it positions early.

👀 Keep eyes on: $AXS | $AXL | $GAS

#AI #AGI #TradFiMeetsCrypto #InstitutionalAdoption 🚀

The beginning of the end

"The countdown has begun": Sam Altman sets the date for the "end of the old world"! Are we ready for the 2028 deluge? 🤖⏳🚨

This is no longer science fiction or conspiracy theory... Today it's 2026, and Sam Altman (the godfather of OpenAI) is dropping a time bomb in the world's face: "Super Intelligence" is not a distant dream... It will be knocking on the door in late 2028!

Fabric Foundation

The evolution of AI is no longer confined to screens — it’s stepping into the physical world. @Fabric Foundation is positioning itself at the center of this transformation by supporting open robotics infrastructure designed to power real-world intelligent machines. From autonomous retail assistants to warehouse automation, embodied AI is redefining how industries operate.
At the heart of this growing ecosystem is $ROBO , a token that connects community participation with technological progress. Rather than focusing solely on speculation, $ROBO represents alignment — builders, researchers, and supporters working together to accelerate open robotics development. Strong ecosystems are built when innovation and community move in sync.
Open collaboration lowers barriers, speeds experimentation, and drives faster iteration in robotics and AGI systems. As adoption increases, initiatives like @Fabric Foundation demonstrate how decentralized communities can contribute to shaping the future of intelligent machines in meaningful ways.
The robotics era is just beginning — and #ROBO symbolizes the shared momentum behind open, embodied AI.
#ROBO #OpenRobotics #AI #AGI #Web3 #Innovation #Robotics
$ROBO — DECENTRALIZED ROBOTICS PROTOCOL SHOCKING MARKET REVELATION 💎
Fabric's core architecture offers a potent framework for responsible defense applications, potentially reshaping counter-terrorism operations.
DIRECTION: SPOT | TIMEFRAME: 1D ⏳

📡 MARKET BRIEFING:
* Decentralized robotics and AGI protocols offer inherent verifiable identity and on-chain accountability for deployed units, ensuring authorized use and immutable audit trails.
* The protocol’s decentralized coordination and real-time governance capabilities empower secure, rapid swarm operations without reliance on vulnerable centralized command systems.
* Modular alignment and safety-first design principles, coupled with community governance, guarantee robots are strictly aligned with human-defined defensive protocols, minimizing misuse.

State your targets below. Let the smart money flow. 👇
Follow for institutional-grade Binance updates. Early moves only.
Disclaimer: Digital assets are volatile. Risk capital only. DYOR.
#Binance $ROBO #Robotics #AGI
🚨 IN SUMMARY: NVIDIA CEO CLAIMS AGI MOMENT 🤖

Nvidia CEO Jensen Huang says “we’ve achieved AGI.”

• Suggests AI systems are reaching human-level general intelligence
• Massive implication for tech, jobs, and global power dynamics
• Could mark a turning point beyond current AI models

BUT:

• No widely accepted scientific or industry consensus confirms true AGI yet
• Likely reflects rapid progress in AI capabilities, not full AGI.

This is a bold, market-moving claim, but AGI is still heavily debated.

#AI #AGI #Nvidia #TechRevolution #ArtificialIntelligence
The adaptive trading system has currently achieved a profit capability of 7740% over the past three months, with the current price at #AGI , a stop loss at 8.14, and a trailing stop loss.
🚨BREAKING: $122 BILLION RAISED
OpenAI just pulled off the LARGEST
funding round in history.
Valuation: $852B
ARR: $30B+
Burn rate: $7B/month
And here’s the wild part…
This only funds 18 months of runway. 🧵👇
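The runway figure follows directly from the post's own numbers:

```python
raised_b = 122          # funding round, $B (per the post)
burn_b_per_month = 7    # monthly burn, $B (per the post)

runway_months = raised_b / burn_b_per_month
print(f"runway ~ {runway_months:.1f} months")  # ~17.4, i.e. roughly 18 months
```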
OpenAI is now the fastest-growing startup ever.
Nearly 1 BILLION users.
Revenue exploding.
Yet it’s burning $7B every single month.
Why?
Because the race to AGI isn’t a normal business.
It’s an arms race.
Compute.
Chips.
Data centers.
Talent.
All scaling at insane speed.
This isn’t just a company anymore.
It’s infrastructure for the future economy.
And the stakes?
Winner takes EVERYTHING.
The fact that $122B only buys 18 months tells you one thing:
We are entering the most capital-intensive tech battle in history.
Big Tech. Governments. Startups.
All racing toward the same finish line.
AGI is no longer a theory.
It’s a trillion-dollar war.

#AI #OpenAI #AGI #Tech #Innovation
#AGI 60,000, lottery, bought a little (personal record only, please do not follow)

Reasons for purchase
1. The narrative is good: it's an Nvidia concept play, riding Nvidia's claim of having achieved general artificial intelligence

2. The odds are sufficient: the new launch peaked at 320,000 and has dropped to 60,000. I bought a little; several groups are watching to see if they can catch a ride

3. The community is okay: nearly 600 holders and over 200 in the community chat, but there are too many small communities and no real scale has formed yet

@binancezh @BinanceSquareCN #跟着锦鲤学打百倍金狗

Follow the Web3 Koi Diary, the bought coin increased tenfold
Elon is right. Centralized AI is a trust trap. You can't regulate what stays hidden. $QUBIC solves this via a decentralized Layer 1. No "black box" secrets, just 676 Quorum Members & #uPoW evolving AGI transparently. Trust math, not CEOs. 🧠⚡️ #Qubic #AGI #ElonMusk #OpenAI
Binance News
·
--
Elon Musk Expresses Distrust in OpenAI
Elon Musk, the CEO of Tesla and SpaceX, has publicly stated his lack of trust in OpenAI. According to Jin10, Musk's comments reflect ongoing concerns about the transparency and control of artificial intelligence development. OpenAI, known for its advanced AI models, has been at the forefront of AI research, raising questions about the ethical implications and potential risks associated with AI technologies. Musk's skepticism highlights the broader debate within the tech industry regarding the responsible development and deployment of AI systems.
Astrocytes: The Hidden Force Behind Brain-Inspired AI

Written by Qubic Scientific Team

How Information Flows in Traditional Artificial Neural Networks
In the artificial intelligence models we know, information enters, is encoded, is transformed through algebraic matrices, and produces outputs. Even in the most advanced architectures such as transformers, the principle is the same: the signal passes through a series of well-defined operations within a structured system. The model functions as a directed processing circuit, from left to right, input-output, or from right to left, through backpropagation for adjustments and training.
The results, as we well know, are spectacular. By working over millions of language parameters, AI is capable of giving magnificent answers, along with some hallucinations, however. But if the goal is not to process inputs and produce outputs, but to build systems capable of maintaining an internal dynamics, adapting continuously, reorganizing themselves, regulating their learning, and sustaining intelligence as a property of the tissue, current AI falls short.
Although people sometimes speak of language models as imitations of the brain, in reality this is more of a comparative metaphor than a simulation of computational neuroscience. Biological systems do not handle information from left to right and vice versa. Information propagates through a network, feeds back on itself, and also oscillates, is dampened, or is reinforced depending on the context.

Fig 1. Left-right information flow in traditional artificial neural networks
Not Only Neurons: The Role of Astrocytes in Brain Function and Synaptic Plasticity
We usually associate cognition and intelligence with the functioning of neurons, their receptors, and neurotransmitters. But they are not the only cells in the nervous system. For a long time, astrocytes were considered nervous system cells devoted to support, cleaning, nutrition, and stability of the environment.
Today we know that they actively participate in regulation; in fact, a term is used: tripartite synapse, in which they actively participate by detecting neurotransmitters, integrating signals from multiple synapses, modulating plasticity, and modifying the functional efficacy of the circuit.
A living network is not composed only of neurons that fire, but also of astrocytes that regulate how, when, and how much the system changes. In biology, computing is not only about emitting a signal but also about modulating the terrain where that signal will have an effect. Recent research has demonstrated that astrocytes can perform normalization operations analogous to self-attention mechanisms found in transformer architectures, linking astrocyte–neuron interactions directly to attention-like computation in artificial intelligence systems.

Fig. 2 Biological astrocytes and tripartite synapse
Astrocytic Gating in Neuraxon: Bio-Inspired Neural Network Architecture
[Neuraxon](https://github.com/DavidVivancos/Neuraxon) is an architecture that tries to recover and emulate the functioning of the brain and to compute functional properties that classical artificial networks have oversimplified.
As we have explained in previous volumes of this academy, Neuraxon does not work only with input, output, and hidden neurons in the conventional sense. It introduces units with states that emulate excitatory, inhibitory, or neutral potentials (-1, 0, +1). In addition, it does so within a continuous TEMPORAL dynamics where we take into account context and the recent history of activation. The network is no longer a sum of layers but resembles more a system with internal physiology. For deeper context on how these foundational elements work, see NIA Volume 1: Why Intelligence Is Not Computed in Steps, but in Time and NIA Volume 2: Ternary Dynamics as a Model of Living Intelligence.
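To make the ternary idea concrete, here is a minimal sketch of a three-state unit whose effective threshold depends on the recent history of activation. The class name, decay constants, and adaptation rule are our own simplification for illustration, not Neuraxon's actual implementation:

```python
from collections import deque

class TernaryUnit:
    """Toy three-state unit: output is -1 (inhibitory), 0 (neutral),
    or +1 (excitatory). The firing threshold adapts with recent
    activation history, a crude stand-in for continuous temporal
    dynamics. Illustrative only, not Neuraxon's code."""

    def __init__(self, threshold=0.5, history_len=5):
        self.base_threshold = threshold
        self.history = deque(maxlen=history_len)

    def step(self, drive):
        # Recent activity raises the effective threshold (adaptation),
        # so the unit's response depends on context, not just the input.
        adaptation = 0.1 * sum(abs(s) for s in self.history)
        theta = self.base_threshold + adaptation
        if drive > theta:
            state = 1
        elif drive < -theta:
            state = -1
        else:
            state = 0
        self.history.append(state)
        return state

unit = TernaryUnit()
states = [unit.step(d) for d in [0.9, 0.9, 0.9, 0.2, -0.9]]
print(states)  # [1, 1, 1, 0, -1]: same-magnitude drives, history-dependent output
```

Note how the fourth input (0.2) would have been subthreshold anyway, while the fifth (-0.9) still fires inhibitory despite the raised threshold: the unit's state, not just the instantaneous input, shapes the output.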
We have explained how Neuraxon models transmission through fast, slow, and neuromodulatory receptors, a mechanism explored in depth in NIA Volume 3: Neuromodulation and Brain-Inspired AI. But now we also model the regulation of plasticity through astrocytic gating.
How Astrocyte-Gated Multi-Timescale Plasticity (AGMP) Works
Astrocytic gating introduces a gate inspired by the role of astrocytes in the tripartite synapse. The idea is to introduce a local, slow, and contextual filter that determines when a synaptic modification should be opened, dampened, or blocked. It is as if the system can consider whether there is permission for a change. This approach directly addresses the stability-plasticity dilemma, one of the most fundamental challenges in continual learning for neural networks.
Eligibility Traces and Local Synaptic Memory
How does it work? Through a kind of eligibility trace. It is a local memory that says, "something relevant has happened at this synapse." It is updated with a decay over time and with a function between presynaptic and postsynaptic activity. That is: the synapse accumulates local evidence of temporal coincidence or causality. From there, there is a global broadcast-type signal, such as an error, a possible reward, or something dopamine-like. The astrocytic gate selects whether the neuron is in a learning state. In future versions, astrocytes could modulate thousands of synapses if this provides a computational advantage.
This approach is consistent with recent advances in neuromorphic computing, including the Astrocyte-Gated Multi-Timescale Plasticity (AGMP) framework proposed for spiking neural networks, which similarly augments eligibility-trace learning with a slow astrocyte state that gates synaptic updates, yielding a four-factor learning rule (eligibility × modulatory signal × astrocytic gate × stabilization).
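The gated, multi-timescale update can be sketched numerically. The sketch below is a toy formulation under our own assumptions (decay constants, the gate dynamics, and the use of the learning rate as the stabilization factor are all illustrative, not the published AGMP parameters): the eligibility trace decays while accumulating pre/post coincidence, a slower astrocyte state gates the update, and the weight changes only when eligibility, a global modulatory signal, and the gate are all present.

```python
def agmp_step(w, elig, gate, pre, post, mod,
              elig_decay=0.9, gate_decay=0.99, lr=0.1):
    """One toy AGMP-style update (illustrative, not the published rule).
    elig : eligibility trace, local memory of pre/post coincidence
    gate : slow astrocyte state that permits or blocks change
    mod  : global broadcast signal (error / reward / dopamine-like)
    """
    # Eligibility: fast decay plus local evidence of coincident activity.
    elig = elig_decay * elig + pre * post
    # Astrocyte gate: slower variable driven by sustained local activity.
    gate = gate_decay * gate + (1 - gate_decay) * abs(pre * post)
    # Four factors: eligibility x modulatory signal x astrocytic gate x lr
    # (lr standing in for the stabilization term in this sketch).
    w = w + lr * mod * gate * elig
    return w, elig, gate

w, elig, gate = 0.0, 0.0, 0.0
for _ in range(20):                    # coincident activity, no global signal
    w, elig, gate = agmp_step(w, elig, gate, pre=1.0, post=1.0, mod=0.0)
w_before = w                           # eligibility built up, weight unchanged
w, elig, gate = agmp_step(w, elig, gate, pre=1.0, post=1.0, mod=1.0)
print(w_before, w)                     # weight moves only once mod arrives
```

The key behavior: twenty steps of coincident activity build eligibility and slowly open the gate, yet the weight stays at zero until the global modulatory signal arrives, at which point the stored local evidence is converted into an actual change.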
Endogenous Regulation: Why Neuraxon Is More Than a Conventional Neural Network
Neuraxon within QUBIC does not compete in scale or task performance. It works through an architecture with endogenous regulation. By incorporating astrocytic principles, it begins to behave like a network with internal ecology. That is: a system where it matters not only which units are activated, but which domains of the tissue are plastic, which are stabilized, which areas are damping noise, which are consolidating regularities, and which are preparing to reorganize themselves. For a comprehensive overview of how biological and artificial neural networks compare, see NIA Volume 4: Neural Networks in AI and Neuroscience.
For Aigarth and QUBIC, the goal is not to accumulate more parameters, but to introduce more levels of functional organization within the system.
Why Astrocytic Gating Matters for Aigarth and Decentralized AI
Aigarth is not a static model but an evolutionary tissue through an architecture capable of growing, mutating, pruning, generating functional offspring, and reorganizing its topology under adaptive pressures. In that context, Neuraxon contributes something: a rich computational microphysiology for the units that inhabit that tissue. This has implications for robustness, adaptability, and memory. Also for scalability. In large architectures, the problem is not only that there are many units, but how to coordinate which parts of the system are available for reconfiguration and which must maintain stability.
In roadmap terms for QUBIC, the goal is to build systems where intelligence emerges not only from neuronal computation, but also from the coupling between fast processing, slow modulation, and structural evolution. You can explore these dynamics firsthand with the interactive Neuraxon 3D simulation on HuggingFace Spaces, where you can build, configure, and simulate a Neuraxon 2.0 network from scratch.

Fig 3. Neuraxon astrocytes gating - AGMP formulation
Scientific References
Allen, N. J., & Eroglu, C. (2017). Cell biology of astrocyte-synapse interactions. Neuron, 96(3), 697–708.
Halassa, M. M., Fellin, T., & Haydon, P. G. (2007). The tripartite synapse: Roles for gliotransmission in health and disease. Trends in Molecular Medicine, 13(2), 54–63.
Kofuji, P., & Araque, A. (2021). Astrocytes and behavior. Annual Review of Neuroscience, 44, 49–67.
Perea, G., Navarrete, M., & Araque, A. (2009). Tripartite synapses: Astrocytes process and control synaptic information. Trends in Neurosciences, 32(8), 421–431.
Woodburn, R. L., Bollinger, J. A., & Wohleb, E. S. (2021). Synaptic and behavioral effects of astrocyte activation. Frontiers in Cellular Neuroscience, 15, 645267.
Vivancos, D., & Sanchez, J. (2026). Neuraxon v2.0: A New Neural Growth & Computation Blueprint. ResearchGate Preprint.
Explore the Full Neuraxon Intelligence Academy
This is Volume 5 of the Neuraxon Intelligence Academy by the Qubic Scientific Team.
If you are just joining us, explore the complete series to build a full understanding of the science behind Neuraxon and Qubic's approach to brain-inspired, decentralized artificial intelligence:
[NIA Volume 1: Why Intelligence Is Not Computed in Steps, but in Time](https://www.binance.com/en/square/post/295315343732018) explores why biological intelligence operates in continuous time rather than discrete computational steps like traditional LLMs.
[NIA Volume 2: Ternary Dynamics as a Model of Living Intelligence](https://www.binance.com/en/square/post/295304276561778) explains ternary dynamics and why three-state logic (excitatory, neutral, inhibitory) matters for modeling living systems.
[NIA Volume 3: Neuromodulation and Brain-Inspired AI](https://www.binance.com/en/square/post/295306656801506) covers neuromodulation and how the brain's chemical signaling (dopamine, serotonin, acetylcholine, norepinephrine) inspires Neuraxon's architecture.
[NIA Volume 4: Neural Networks in AI and Neuroscience](https://www.binance.com/en/square/post/295302152913618) is a deep comparison of biological neural networks, artificial neural networks, and Neuraxon's third-path approach.
Qubic is a decentralized, open-source network for experimental technology. To learn more, visit qubic.org
#Qubic #AGI #Neuraxon #academy #decentralized

Astrocytes: The Hidden Force Behind Brain-Inspired AI

Written by Qubic Scientific Team

How Information Flows in Traditional Artificial Neural Networks
In the artificial intelligence models we know, information enters, is encoded, is transformed through algebraic matrices, and produces outputs. Even in the most advanced architectures such as transformers, the principle is the same: the signal passes through a series of well-defined operations within a structured system. The model functions as a directed processing circuit, from left to right, input-output, or from right to left, through backpropagation for adjustments and training.
The results, as we well know, are spectacular. By working over millions of language parameters, AI is capable of giving magnificent answers, along with some hallucinations, however. But if the goal is not to process inputs and produce outputs, but to build systems capable of maintaining an internal dynamics, adapting continuously, reorganizing themselves, regulating their learning, and sustaining intelligence as a property of the tissue, current AI falls short.
Although people sometimes speak of language models as imitations of the brain, in reality this is more of a comparative metaphor than a simulation of computational neuroscience. Biological systems do not handle information from left to right and vice versa. Information propagates through a network, feeds back on itself, and also oscillates, is dampened, or is reinforced depending on the context.

Fig 1. Left-right information flow in traditional artificial neural networks
Not Only Neurons: The Role of Astrocytes in Brain Function and Synaptic Plasticity
We usually associate cognition and intelligence with the functioning of neurons, their receptors, and neurotransmitters. But they are not the only cells in the nervous system. For a long time, astrocytes were considered nervous system cells devoted to support, cleaning, nutrition, and stability of the environment. Today we know that they actively participate in regulation; in fact, a term is used: tripartite synapse, in which they actively participate by detecting neurotransmitters, integrating signals from multiple synapses, modulating plasticity, and modifying the functional efficacy of the circuit.
A living network is not composed only of neurons that fire, but also of astrocytes that regulate how, when, and how much the system changes. In biology, computing is not only about emitting a signal but also about modulating the terrain where that signal will have an effect. Recent research has demonstrated that astrocytes can perform normalization operations analogous to self-attention mechanisms found in transformer architectures — linking astrocyte–neuron interactions directly to attention-like computation in artificial intelligence systems.

Fig. 2 Biological astrocytes and tripartite synapse 
Astrocytic Gating in Neuraxon: Bio-Inspired Neural Network Architecture
Neuraxon is an architecture that seeks to recover and emulate functional properties of the brain that classical artificial networks have oversimplified.
As we have explained in previous volumes of this academy, Neuraxon does not work only with input, hidden, and output neurons in the conventional sense. It introduces units whose states emulate excitatory, inhibitory, or neutral potentials (+1, -1, 0). Moreover, it does so within a continuous temporal dynamics that takes into account context and the recent history of activation. The network is no longer a stack of layers; it resembles a system with an internal physiology. For deeper context on how these foundational elements work, see NIA Volume 1: Why Intelligence Is Not Computed in Steps, but in Time and NIA Volume 2: Ternary Dynamics as a Model of Living Intelligence.
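A toy update step can illustrate the idea. Everything here is a hypothetical sketch, not Neuraxon's actual implementation: a continuous internal potential carries recent history through a leak term, and the emitted signal is quantized to the ternary states +1, 0, -1.

```python
def step_unit(state, drive, leak=0.9, theta=0.5):
    """Illustrative update of a ternary unit (parameter names assumed):
    the internal state integrates input over time, and the output is
    quantized to excitatory (+1), neutral (0), or inhibitory (-1)."""
    state = leak * state + (1.0 - leak) * drive   # temporal integration
    if state > theta:
        out = +1      # excitatory
    elif state < -theta:
        out = -1      # inhibitory
    else:
        out = 0       # neutral
    return state, out

# Sustained positive drive gradually pushes the unit into the +1 state;
# a brief pulse would decay away without ever crossing the threshold.
s = 0.0
for _ in range(30):
    s, y = step_unit(s, drive=1.0)
```

The point of the sketch is that the output depends on the recent history of activation, not only on the instantaneous input, which is the temporal dimension the article emphasizes.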
We have explained how Neuraxon models transmission through fast, slow, and neuromodulatory receptors — a mechanism explored in depth in NIA Volume 3: Neuromodulation and Brain-Inspired AI. But now we also model the regulation of plasticity through astrocytic gating.
How Astrocyte-Gated Multi-Timescale Plasticity (AGMP) Works
Astrocytic gating introduces a gate inspired by the role of astrocytes in the tripartite synapse. The idea is to introduce a local, slow, and contextual filter that determines when a synaptic modification should be opened, dampened, or blocked. It is as if the system can consider whether there is permission for a change. This approach directly addresses the stability-plasticity dilemma, one of the most fundamental challenges in continual learning for neural networks.
Eligibility Traces and Local Synaptic Memory
How does it work? Through a kind of eligibility trace: a local memory that says, "something relevant has happened at this synapse." It decays over time and is updated by a function of presynaptic and postsynaptic activity; that is, the synapse accumulates local evidence of temporal coincidence or causality. On top of this arrives a global broadcast signal, such as an error, a possible reward, or a dopamine-like modulator. The astrocytic gate then selects whether the neuron is in a learning state. In future versions, a single astrocyte could modulate thousands of synapses if this proves computationally advantageous.
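A minimal sketch of such a trace, with an assumed decay constant and a simple pre-post product as the coincidence function (both are illustrative choices, not Neuraxon's published equations):

```python
def update_trace(trace, pre, post, decay=0.95):
    """Sketch of a local eligibility trace: it decays over time and
    grows when presynaptic and postsynaptic activity coincide,
    accumulating local evidence of temporal causality."""
    return decay * trace + pre * post

trace = 0.0
# Coincident pre/post activity charges the trace...
for _ in range(5):
    trace = update_trace(trace, pre=1.0, post=1.0)
# ...and silence lets it decay back toward zero.
for _ in range(5):
    trace = update_trace(trace, pre=0.0, post=0.0)
```

The trace thus marks a synapse as "eligible" for a while after relevant activity, giving a later global signal something local to act on.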
This approach is consistent with recent advances in neuromorphic computing, including the Astrocyte-Gated Multi-Timescale Plasticity (AGMP) framework proposed for spiking neural networks, which similarly augments eligibility-trace learning with a slow astrocyte state that gates synaptic updates — yielding a four-factor learning rule (eligibility × modulatory signal × astrocytic gate × stabilization).
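The four-factor structure can be sketched in a few lines. The factor names follow the description above; the learning rate, the weight-decay term standing in for stabilization, and the specific numbers are assumptions for illustration only.

```python
def weight_update(eligibility, modulator, astro_gate, w,
                  lr=0.01, stab=0.001):
    """Hedged sketch of a four-factor rule: the update is the product of
    the local eligibility trace, a global modulatory signal (reward or
    error), and a slow astrocytic gate in [0, 1], with a small
    weight-decay term playing the role of stabilization."""
    dw = lr * eligibility * modulator * astro_gate - stab * w
    return w + dw

w = 0.5
# Gate closed: no learning occurs, only mild stabilization.
w_closed = weight_update(eligibility=1.0, modulator=1.0, astro_gate=0.0, w=w)
# Gate open: the eligible, rewarded synapse is permitted to change.
w_open = weight_update(eligibility=1.0, modulator=1.0, astro_gate=1.0, w=w)
```

Because the factors multiply, any one of them being zero vetoes the update, which is exactly how the gate grants or withholds "permission for a change."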
Endogenous Regulation: Why Neuraxon Is More Than a Conventional Neural Network
Neuraxon within QUBIC does not compete in scale or task performance. It works through an architecture with endogenous regulation. By incorporating astrocytic principles, it begins to behave like a network with internal ecology. That is: a system where it matters not only which units are activated, but which domains of the tissue are plastic, which are stabilized, which areas are damping noise, which are consolidating regularities, and which are preparing to reorganize themselves. For a comprehensive overview of how biological and artificial neural networks compare, see NIA Volume 4: Neural Networks in AI and Neuroscience.
For Aigarth and QUBIC, the goal is not to accumulate more parameters, but to introduce more levels of functional organization within the system.
Why Astrocytic Gating Matters for Aigarth and Decentralized AI
Aigarth is not a static model but an evolutionary tissue: an architecture capable of growing, mutating, pruning, generating functional offspring, and reorganizing its topology under adaptive pressure. In that context, Neuraxon contributes something essential: a rich computational microphysiology for the units that inhabit that tissue.
This has implications for robustness, adaptability, and memory. Also for scalability. In large architectures, the problem is not only that there are many units, but how to coordinate which parts of the system are available for reconfiguration and which must maintain stability.
In roadmap terms for QUBIC, the goal is to build systems where intelligence emerges not only from neuronal computation, but also from the coupling between fast processing, slow modulation, and structural evolution. You can explore these dynamics firsthand with the interactive Neuraxon 3D simulation on HuggingFace Spaces, where you can build, configure, and simulate a Neuraxon 2.0 network from scratch.
Fig 3. Neuraxon astrocytes gating - AGMP formulation
Scientific References
Allen, N. J., & Eroglu, C. (2017). Cell biology of astrocyte-synapse interactions. Neuron, 96(3), 697–708.
Halassa, M. M., Fellin, T., & Haydon, P. G. (2007). The tripartite synapse: Roles for gliotransmission in health and disease. Trends in Molecular Medicine, 13(2), 54–63.
Kofuji, P., & Araque, A. (2021). Astrocytes and behavior. Annual Review of Neuroscience, 44, 49–67.
Perea, G., Navarrete, M., & Araque, A. (2009). Tripartite synapses: Astrocytes process and control synaptic information. Trends in Neurosciences, 32(8), 421–431.
Woodburn, R. L., Bollinger, J. A., & Wohleb, E. S. (2021). Synaptic and behavioral effects of astrocyte activation. Frontiers in Cellular Neuroscience, 15, 645267.
Vivancos, D., & Sanchez, J. (2026). Neuraxon v2.0: A New Neural Growth & Computation Blueprint. ResearchGate Preprint.
Explore the Full Neuraxon Intelligence Academy
This is Volume 5 of the Neuraxon Intelligence Academy by the Qubic Scientific Team. If you are just joining us, explore the complete series to build a full understanding of the science behind Neuraxon and Qubic's approach to brain-inspired, decentralized artificial intelligence:
NIA Volume 1: Why Intelligence Is Not Computed in Steps, but in Time — Explores why biological intelligence operates in continuous time rather than discrete computational steps like traditional LLMs.
NIA Volume 2: Ternary Dynamics as a Model of Living Intelligence — Explains ternary dynamics and why three-state logic (excitatory, neutral, inhibitory) matters for modeling living systems.
NIA Volume 3: Neuromodulation and Brain-Inspired AI — Covers neuromodulation and how the brain's chemical signaling (dopamine, serotonin, acetylcholine, norepinephrine) inspires Neuraxon's architecture.
NIA Volume 4: Neural Networks in AI and Neuroscience — A deep comparison of biological neural networks, artificial neural networks, and Neuraxon's third-path approach.
Qubic is a decentralized, open-source network for experimental technology. To learn more, visit qubic.org
#Qubic #AGI #Neuraxon #academy #decentralized
🚀 Upcoming Token Unlocks Next Week!

A massive $973.66 million worth of tokens is set to be unlocked, with some key projects seeing significant releases. Here’s a breakdown of the most notable unlocks:

🔹 $ENA – Leading the pack with $855.23M unlocked (65.93% of total unlocks).

🔹 $SUI – Unlocking $106.98M (1.24% of total supply).
🔹 $NEON – Releasing $4.12M (11.20% of total unlocks).
🔹 $AGI – Unlocking $1.84M (1.71% of total unlocks).
🔹 $IOTA – Unlocking $1.76M (0.24% of total unlocks).
🔹 $SPELL – Releasing $1.01M (0.83% of total unlocks).

These token unlocks could influence market movements, so keeping an eye on them is crucial for investors and traders. Monitor liquidity, price action, and potential impacts as these assets enter circulation.
#CryptoUnlocks #ENA #SUI #NEON #AGI