The need for AI memory management is real. Every time I switch AI tools, all previous conversation context is lost and I have to explain my needs from scratch, which is extremely annoying. Vanar's myNeutron claims to solve this pain point by moving the memory layer on-chain, letting an AI remember your preferences and history across platforms and sessions. It sounds great, but after two weeks on version 1.3, I found there is still a gap between the ideal and the reality.
The core logic of myNeutron is to compress various data sources (web pages, documents, conversations) into "seeds" and then package them into "memory bundles," forming queryable combinations of context. The v1.3 update added automatic packaging: in theory, the system identifies which memory bundle a new seed belongs to, with no manual organization needed. My test scenario was saving web pages and reports related to blockchain projects to see whether it could help me retrieve relevant information quickly.
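To make the hierarchy concrete, here is how I ended up picturing it, as a pair of Python dataclasses. The names and fields are my own reconstruction from observed behavior, not myNeutron's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Seed:
    source_url: str        # web page, document, or conversation export
    compressed_text: str   # the lossy "semantic" summary of the source
    created_at: float      # unix timestamp

@dataclass
class MemoryBundle:
    topic: str             # label the v1.3 auto-packager assigns
    seeds: list[Seed] = field(default_factory=list)
```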
At first, the experience was quite good. After installing the Chrome extension, I could save a useful page as a seed with one click, and compression was fast: an article of several thousand words was processed in a few seconds. But problems soon emerged, starting with the poor accuracy of automatic packaging. I saved more than ten pieces of content about DeFi protocols, and it split them across three memory bundles, putting Uniswap and Curve material in different bundles on the grounds of "different liquidity models." That classification logic is defensible, but when I asked for a "comparison of DEX protocols," it returned the contents of only one bundle, and I had to check the other bundles separately, which made the experience even more fragmented.
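The workaround I settled on is to fan a query out across every bundle and merge the hits myself. A minimal sketch, with bundles modeled as plain dicts and a naive keyword match standing in for whatever retrieval call the product actually exposes:

```python
# Bundles modeled as plain dicts; the keyword match is a naive stand-in
# for the real per-bundle retrieval call.
def query_all_bundles(bundles: list[dict], query: str) -> list[str]:
    hits = []
    for bundle in bundles:
        hits += [s for s in bundle["seeds"] if query.lower() in s.lower()]
    return hits

bundles = [
    {"topic": "constant-product AMMs",
     "seeds": ["Uniswap v3 concentrates liquidity in price ranges ..."]},
    {"topic": "stableswap AMMs",
     "seeds": ["Curve's stableswap invariant flattens the curve ..."]},
]
print(query_all_bundles(bundles, "swap"))  # hits from both bundles, merged
```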
An even more awkward problem is that the compression quality of seeds is unstable. I saved a technical white paper whose original text contained detailed formula derivations, but in the seed every formula had been turned into a text description, losing all precision. Customer service said this is by design: semantic compression turns symbols into natural language. For technical documents, that is a disaster. If you are studying AI algorithms or smart contract code, myNeutron is basically unusable; it is only suitable for plain text and handles structured data and code very poorly.
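What I wish the compressor did is detect formula spans with cheap heuristics and pass them through verbatim, summarizing only the surrounding prose. A sketch of that guard, assuming LaTeX-style math delimiters; the real pipeline is closed source, so this is purely illustrative:

```python
import re

# crude heuristics for spans that must survive verbatim:
# display math ($$ ... $$) and inline math ($ ... $)
TECHNICAL = re.compile(r"\$\$.*?\$\$|\$[^$\n]+\$", re.DOTALL)

def split_for_compression(doc: str) -> list[tuple[str, str]]:
    """Tag each span 'verbatim' (keep exactly) or 'prose' (safe to summarize)."""
    spans, last = [], 0
    for m in TECHNICAL.finditer(doc):
        if m.start() > last:
            spans.append(("prose", doc[last:m.start()]))
        spans.append(("verbatim", m.group()))
        last = m.end()
    if last < len(doc):
        spans.append(("prose", doc[last:]))
    return spans

print(split_for_compression("The invariant $x \\cdot y = k$ governs swaps."))
```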
The cross-platform memory feature does actually work. I connected the seeds to Claude and GPT, and both sides could access the same set of memory bundles without my re-entering background information. But there is a built-in contradiction: on-chain storage means every query goes through the blockchain, with much higher latency than local storage. In my tests, querying a memory bundle took 3-5 seconds on average, and when a bundle contained many seeds the delay could reach 10 seconds. That is far too slow for AI scenarios that demand quick interaction; code completion and real-time conversation simply cannot wait that long. Vanar emphasizes "the persistence of memory," but sacrificing response speed for it will not suit every scenario.
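If you control the client side, a read-through cache takes most of the sting out of repeated lookups. A minimal sketch, assuming bundle contents change rarely enough for a five-minute TTL; fetch_bundle_onchain is a hypothetical stand-in for the real on-chain read:

```python
import time

_cache: dict[str, tuple[float, dict]] = {}
TTL_SECONDS = 300  # assumption: bundle contents change rarely

def fetch_bundle_onchain(bundle_id: str) -> dict:
    """Hypothetical stand-in for the real on-chain read (the 3-10 s path)."""
    time.sleep(0.01)  # simulated latency, scaled down for the example
    return {"id": bundle_id, "seeds": []}

def get_bundle(bundle_id: str) -> dict:
    now = time.time()
    hit = _cache.get(bundle_id)
    if hit and now - hit[0] < TTL_SECONDS:
        return hit[1]                         # warm path: local, instant
    bundle = fetch_bundle_onchain(bundle_id)  # cold path: pays the latency
    _cache[bundle_id] = (now, bundle)
    return bundle

get_bundle("defi-research")  # cold: hits the chain
get_bundle("defi-research")  # warm: served from the local cache
```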
What surprised me most was the subscription model. myNeutron is currently free, but the team has said it will move to a subscription, and on-chain activity will carry costs. Specific pricing has not been announced, but since everything goes on-chain, gas fees are unavoidable. Vanar uses DPoS consensus, so fees should in theory be lower than Ethereum's, but its on-chain activity is very low, with only 83 DEX transactions per day, so there is no way to predict how fees would behave under network congestion. If such a basic memory-management function requires frequent gas payments, will ordinary users accept that? I doubt it.
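For a sense of the stakes, a back-of-envelope calculation, where every number is a placeholder since pricing is unannounced:

```python
# Every number here is a placeholder: pricing is unannounced, so treat
# these purely as assumptions to show how per-write fees could add up.
saves_per_day = 20          # assumed heavy-user save rate
fee_per_save_usd = 0.002    # assumed per-write fee on a low-fee DPoS chain

monthly_cost = saves_per_day * 30 * fee_per_save_usd
print(f"~${monthly_cost:.2f}/month under these assumptions")  # ~$1.20

congestion_multiplier = 10  # assumed fee spike under congestion
print(f"~${monthly_cost * congestion_multiplier:.2f}/month if fees spike 10x")
```

Trivial at the base assumptions, but the congestion case is exactly the part nobody can predict on a chain with this little activity.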
Compared with its competitors, myNeutron's differentiation lies in on-chain storage and cross-platform access, but that is also its disadvantage. Note-taking tools like Notion and Obsidian can also manage knowledge, and they are fast, free, and local; they cannot span AI platforms, but most people do not need a scenario that complex. The users who genuinely need on-chain memory are probably enterprise applications, for things like compliance auditing and multi-party collaboration, yet myNeutron's current features are clearly still aimed at individual users, with no enterprise-grade permission management or data isolation.
I also briefly tried the Kayon inference engine. It accepts natural language queries, such as "find protocols with TVL growth exceeding 10% in the last 7 days," and the results are indeed more convenient than calling an API directly. The problem is that Kayon's data sources are limited: it mainly pulls public data from DeFiLlama, and its coverage of real-time on-chain data is incomplete. I asked about "the largest liquidity pool on Vanar's own chain," and it flatly said there was no data, citing incomplete statistics from Vanar's block explorer. That is rather ironic: if it cannot even surface its own chain's data, how will it convince users to query other chains with it?
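For comparison, the same question asked directly of DeFiLlama's public API takes only a few lines. Field names like change_7d and tvl reflect the /protocols endpoint as I understand it and may change:

```python
import requests

resp = requests.get("https://api.llama.fi/protocols", timeout=30)
resp.raise_for_status()

# protocols whose 7-day TVL change exceeds 10%, largest TVL first
growing = [p for p in resp.json() if (p.get("change_7d") or 0) > 10]
growing.sort(key=lambda p: p.get("tvl") or 0, reverse=True)

for p in growing[:10]:
    print(f"{p['name']}: +{p['change_7d']:.1f}% (7d), TVL ${(p.get('tvl') or 0):,.0f}")
```

So Kayon's value is the natural language layer, not the data; the data itself is already public and easy to reach.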
There is also a technical detail worth mentioning: Neutron's compression algorithm is proprietary and not open source. The official reason is protecting intellectual property, but that is unusual for a blockchain project. The ethos of blockchain is open source and transparency, and with Neutron's core compression logic closed, users have no way to verify that their data has not been tampered with. Vanar emphasizes that on-chain storage is immutable, but if the compression step is opaque, everything that happens to the data before it goes on-chain is a black box. The issue gets little discussion in the community, probably because so few people are using the product, but if myNeutron is serious about large-scale adoption, the open-source question will eventually have to be faced.
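Verifiability would not even require open-sourcing the compressor. A standard fix is a hash commitment: digest the raw source before it enters the compressor and publish the digest alongside the seed, so anyone holding the original bytes can check what was actually compressed. A minimal sketch; myNeutron exposes no such hook that I know of, so the publishing step stays a comment:

```python
import hashlib, json, time

def commit_source(raw: bytes) -> dict:
    """Digest the raw source before it enters the opaque compressor."""
    return {
        "source_sha256": hashlib.sha256(raw).hexdigest(),
        "captured_at": int(time.time()),
    }

raw_page = b"<html>...original page bytes...</html>"
commitment = commit_source(raw_page)
# Publishing this on-chain next to the seed would let anyone holding the
# original bytes verify exactly what went into compression.
print(json.dumps(commitment, indent=2))
```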
From a product-maturity perspective, myNeutron is still in its early stages. The automatic packaging in v1.3 is a step forward, but both its accuracy and its usability fall short. It suits people who like to experiment and have deep AI-workflow needs, but the learning curve is too steep for ordinary users. On top of that, Vanar's community support is very weak: there is no active discussion in Discord, and when you hit a problem you basically have to figure it out on your own, which is fatal for product adoption.
To end on a positive note: myNeutron's Chrome extension is cleanly designed, saving a seed is smooth, and at least there are no obvious bugs. Cross-platform memory does address a real pain point; the implementation just is not there yet. If later updates can fix the compression quality, query latency, and data-source coverage, and set reasonable pricing, myNeutron still has potential. At this stage, though, it feels more like a technology demo than a mature product.
My conclusion: myNeutron is currently worth trying but not worth relying on as a primary tool. If your need is managing large amounts of plain-text information and you can tolerate the latency and imperfect categorization, give it a try. If you need to handle structured data or code, or you require high-frequency interaction, stick with traditional tools. Vanar's idea of an AI memory layer is genuinely forward-looking, but the gap between the technical implementation and the user experience has not been bridged. I hope future versions bring substantial improvements and that a good concept does not fail in execution.